[jira] [Created] (HIVE-19370) Issue: ADD Months function on timestamp datatype fields in hive

2018-04-30 Thread Amit Chauhan (JIRA)
Amit Chauhan created HIVE-19370:
---

 Summary: Issue: ADD Months function on timestamp datatype fields 
in hive
 Key: HIVE-19370
 URL: https://issues.apache.org/jira/browse/HIVE-19370
 Project: Hive
  Issue Type: Bug
Reporter: Amit Chauhan


*Issue:*

While using the ADD_MONTHS function on a timestamp datatype column, the output omits 
the time part [HH:MM:SS], which should not be the case.

*Query:* EMAIL_FAILURE_DTMZ is of datatype timestamp in Hive.

hive> select CUSTOMER_ID,EMAIL_FAILURE_DTMZ,ADD_MONTHS (EMAIL_FAILURE_DTMZ , 1) 
from TABLE1 where CUSTOMER_ID=125674937;
OK
125674937   2015-12-09 12:25:53 2016-01-09

*Hive version:*

hive> !hive --version;
 Hive 1.2.1000.2.5.6.0-40

 

Can you please help me somehow get the below as output:

 

125674937   2015-12-09 12:25:53   2016-01-09 12:25:53
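For illustration, the desired semantics (adding months while preserving the HH:MM:SS part) can be sketched in Python. This is a hedged sketch of the expected behavior, not Hive's actual implementation:

```python
import calendar
from datetime import datetime

def add_months(ts: datetime, n: int) -> datetime:
    """Add n months, preserving the time-of-day component.

    The day is clamped to the last day of the target month
    (e.g. Jan 31 + 1 month -> Feb 28), the usual ADD_MONTHS rule.
    """
    total = ts.month - 1 + n
    year = ts.year + total // 12
    month = total % 12 + 1
    day = min(ts.day, calendar.monthrange(year, month)[1])
    return ts.replace(year=year, month=month, day=day)

print(add_months(datetime(2015, 12, 9, 12, 25, 53), 1))
# -> 2016-01-09 12:25:53
```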



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 66805: HIVE-19311 : Partition and bucketing support for “load data” statement

2018-04-30 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66805/
---

(Updated May 1, 2018, 4:50 a.m.)


Review request for hive, Ashutosh Chauhan, Eugene Koifman, Jesús Camacho 
Rodríguez, Prasanth_J, and Vineet Garg.


Changes
---

Handle nested subdirs.


Bugs: HIVE-19311
https://issues.apache.org/jira/browse/HIVE-19311


Repository: hive-git


Description
---

Currently, the "load data" statement is very limited. It errors out if any required 
information is missing, such as partitioning info when the table is partitioned, or 
appropriately named files when the table is bucketed.
It should instead be able to launch an insert job to load the data.


Diffs (updated)
-

  data/files/load_data_job/bucketing.txt PRE-CREATION 
  data/files/load_data_job/load_data_1_partition.txt PRE-CREATION 
  data/files/load_data_job/partitions/load_data_1_partition.txt PRE-CREATION 
  data/files/load_data_job/partitions/load_data_2_partitions.txt PRE-CREATION 
  data/files/load_data_job/partitions/subdir/load_data_1_partition.txt 
PRE-CREATION 
  data/files/load_data_job/partitions/subdir/load_data_2_partitions.txt 
PRE-CREATION 
  itests/src/test/resources/testconfiguration.properties 2ca7b5f63b 
  ql/src/java/org/apache/hadoop/hive/ql/Context.java 0fedf0e76e 
  ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 94dd63641d 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java abd678bb54 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
c07991d434 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java fad0e5c24a 
  ql/src/java/org/apache/hadoop/hive/ql/parse/UpdateDeleteSemanticAnalyzer.java 
2f3b07f4af 
  ql/src/test/org/apache/hadoop/hive/ql/TestTxnLoadData.java ec8c1507ec 
  ql/src/test/queries/clientnegative/load_part_nospec.q 81517991b2 
  ql/src/test/queries/clientnegative/nopart_load.q 966982fd5c 
  ql/src/test/queries/clientpositive/load_data_using_job.q PRE-CREATION 
  ql/src/test/results/clientnegative/load_part_nospec.q.out bebaf92311 
  ql/src/test/results/clientnegative/nopart_load.q.out 881514640c 
  ql/src/test/results/clientpositive/llap/load_data_using_job.q.out 
PRE-CREATION 


Diff: https://reviews.apache.org/r/66805/diff/6/

Changes: https://reviews.apache.org/r/66805/diff/5-6/


Testing
---

Added a unit test.


Thanks,

Deepak Jaiswal



[jira] [Created] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers

2018-04-30 Thread Gopal V (JIRA)
Gopal V created HIVE-19369:
--

 Summary: Locks: Add new lock implementations for always zero-wait 
readers
 Key: HIVE-19369
 URL: https://issues.apache.org/jira/browse/HIVE-19369
 Project: Hive
  Issue Type: Improvement
Reporter: Gopal V


Hive locking with micro-managed and full-ACID tables needs a better locking 
implementation that always allows no-wait readers.

EXCL_DROP
EXCL_WRITE
SHARED_WRITE
SHARED_READ

Short write-up:

EXCL_DROP is a "drop partition" or "drop table" and waits for all others to exit.
EXCL_WRITE excludes all other writes and will wait for all existing SHARED_WRITE 
holders to exit.
SHARED_WRITE allows all SHARED_WRITEs to go through, but will wait for an 
EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different 
threads).

SHARED_READ does not wait for any lock - it fails fast for a pending EXCL_DROP, 
because even if there is an EXCL_WRITE or SHARED_WRITE pending, there's no 
semantic reason to wait for them to succeed before going ahead with a 
SHARED_READ.

a select * => SHARED_READ
an insert into => SHARED_WRITE
an insert overwrite or MERGE => EXCL_WRITE
a drop table => EXCL_DROP
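The rules above can be sketched as a small compatibility check. This is a hedged sketch based only on this write-up; the grant/wait/fail outcomes (in particular, that readers do not block an EXCL_WRITE) are assumptions, not the actual Hive lock manager:

```python
from enum import Enum

class Lock(Enum):
    SHARED_READ = 1
    SHARED_WRITE = 2
    EXCL_WRITE = 3
    EXCL_DROP = 4

def check(requested: Lock, existing: Lock) -> str:
    """Outcome of requesting a lock while another lock is held or
    pending: 'grant', 'wait', or 'fail'."""
    if requested is Lock.SHARED_READ:
        # Zero-wait readers: fail fast only on a pending drop.
        return "fail" if existing is Lock.EXCL_DROP else "grant"
    if requested is Lock.SHARED_WRITE:
        # Shared writes go through together; wait only for exclusives.
        return "wait" if existing in (Lock.EXCL_WRITE, Lock.EXCL_DROP) else "grant"
    if requested is Lock.EXCL_WRITE:
        # Excludes all other writes; readers don't block it (assumption).
        return "grant" if existing is Lock.SHARED_READ else "wait"
    # EXCL_DROP waits for all others to exit.
    return "wait"

print(check(Lock.SHARED_READ, Lock.EXCL_WRITE))  # -> grant
```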

TODO:

The fate of the compactor needs to be added to this before it is a complete 
description.





[jira] [Created] (HIVE-19368) Metastore: log a warning with table-name + partition-count when get_partitions returns >10k partitions

2018-04-30 Thread Gopal V (JIRA)
Gopal V created HIVE-19368:
--

 Summary: Metastore: log a warning with table-name + 
partition-count when get_partitions returns >10k partitions
 Key: HIVE-19368
 URL: https://issues.apache.org/jira/browse/HIVE-19368
 Project: Hive
  Issue Type: Improvement
Reporter: Gopal V


Ran into this particular "letter from the trenches" and would like a normal WARN 
log for it.

https://www.slideshare.net/Hadoop_Summit/hive-at-yahoo-letters-from-the-trenches/24
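A minimal sketch of the requested check (the 10k threshold comes from the summary; the message format and function name are assumptions):

```python
import logging

LOG = logging.getLogger("metastore")
PARTITION_WARN_THRESHOLD = 10_000  # threshold taken from the issue summary

def partition_warning(table_name: str, partition_count: int,
                      threshold: int = PARTITION_WARN_THRESHOLD):
    """Return the WARN message for an oversized get_partitions result,
    or None if the fetch is below the threshold."""
    if partition_count > threshold:
        msg = ("get_partitions: table %s returned %d partitions"
               % (table_name, partition_count))
        LOG.warning(msg)
        return msg
    return None
```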







Re: Review Request 66720: HIVE-17657 export/import for MM tables is broken

2018-04-30 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66720/
---

(Updated May 1, 2018, 12:44 a.m.)


Review request for hive and Eugene Koifman.


Repository: hive-git


Description
---

.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java b0ec5abcce 
  ql/src/java/org/apache/hadoop/hive/ql/exec/ExportTask.java aba65918f8 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java b5a7853101 
  ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java ce0757cba2 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
d3c62a2775 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
b850ddc9d0 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java 
820046388a 
  ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/PartitionExport.java 
5844f3d97f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/TableExport.java 
abb2e8874b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/io/FileOperations.java 
866d3513b1 
  ql/src/java/org/apache/hadoop/hive/ql/plan/CopyWork.java c0e4a43d9c 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ExportWork.java 72ce79836c 
  ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands.java 6a3be39ce4 
  ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java 6daac1b789 
  ql/src/test/org/apache/hadoop/hive/ql/TxnCommandsBaseForTests.java a2adb966fe 
  ql/src/test/queries/clientpositive/mm_exim.q c47342bd23 
  ql/src/test/results/clientpositive/llap/mm_exim.q.out 1f40754373 


Diff: https://reviews.apache.org/r/66720/diff/4/

Changes: https://reviews.apache.org/r/66720/diff/3-4/


Testing
---


Thanks,

Sergey Shelukhin



Re: Review Request 66720: HIVE-17657 export/import for MM tables is broken

2018-04-30 Thread Sergey Shelukhin


> On April 28, 2018, 1:52 a.m., Eugene Koifman wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/io/FileOperations.java
> > Lines 117 (patched)
> > 
> >
> > this should include getOriginalFiles() check if table was converted to 
> > MM but not yet compacted (I assume the patch to make this conversion 
> > metadata-only operation is still somewhere in flight)

Will be fixed in the patch that adds original files support that will likely be 
committed after this patch.
Or in this patch if that one goes first.


- Sergey


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66720/#review202100
---


On April 23, 2018, 9:18 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66720/
> ---
> 
> (Updated April 23, 2018, 9:18 p.m.)
> 
> 
> Review request for hive and Eugene Koifman.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> .
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java ce683c8a8d 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExportTask.java aba65918f8 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 6395c31ec7 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java 
> ce0757cba2 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
> d3c62a2775 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
> b850ddc9d0 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java 
> 820046388a 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/PartitionExport.java 
> 5844f3d97f 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/TableExport.java 
> abb2e8874b 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/io/FileOperations.java 
> 866d3513b1 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/CopyWork.java c0e4a43d9c 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExportWork.java 72ce79836c 
>   ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands.java 12d57c6feb 
>   ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java 0e53697be2 
>   ql/src/test/org/apache/hadoop/hive/ql/TxnCommandsBaseForTests.java 
> a2adb966fe 
>   ql/src/test/queries/clientpositive/mm_exim.q c47342bd23 
>   ql/src/test/results/clientpositive/llap/mm_exim.q.out 1f40754373 
> 
> 
> Diff: https://reviews.apache.org/r/66720/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[jira] [Created] (HIVE-19367) Load Data should fail for empty Parquet files.

2018-04-30 Thread Deepak Jaiswal (JIRA)
Deepak Jaiswal created HIVE-19367:
-

 Summary: Load Data should fail for empty Parquet files.
 Key: HIVE-19367
 URL: https://issues.apache.org/jira/browse/HIVE-19367
 Project: Hive
  Issue Type: Bug
Reporter: Deepak Jaiswal
Assignee: Deepak Jaiswal


Load data does not validate the input for Parquet tables. This results in query 
failures.





Re: Review Request 66645: HIVE-19211: New streaming ingest API and support for dynamic partitioning

2018-04-30 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66645/
---

(Updated April 30, 2018, 11:10 p.m.)


Review request for hive, Ashutosh Chauhan and Eugene Koifman.


Changes
---

Rebased patch to latest master.


Bugs: HIVE-19211
https://issues.apache.org/jira/browse/HIVE-19211


Repository: hive-git


Description
---

HIVE-19211: New streaming ingest API and support for dynamic partitioning


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 6e35653 
  
hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
 90dbdac 
  itests/hive-unit/pom.xml 3ae7f2f 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
 8ee033d 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveClientCache.java 
PRE-CREATION 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreUtils.java 
a66c135 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java 09f8802 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java 76569d5 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 4661881 
  serde/src/java/org/apache/hadoop/hive/serde2/JsonSerDe.java PRE-CREATION 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
 8c159e9 
  streaming/pom.xml b58ec01 
  streaming/src/java/org/apache/hive/streaming/AbstractRecordWriter.java 
25998ae 
  streaming/src/java/org/apache/hive/streaming/ConnectionError.java 668bffb 
  streaming/src/java/org/apache/hive/streaming/ConnectionInfo.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/DelimitedInputWriter.java 
898b3f9 
  streaming/src/java/org/apache/hive/streaming/HeartBeatFailure.java b1f9520 
  streaming/src/java/org/apache/hive/streaming/HiveEndPoint.java b04e137 
  streaming/src/java/org/apache/hive/streaming/HiveStreamingConnection.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/ImpersonationFailed.java 23e17e7 
  streaming/src/java/org/apache/hive/streaming/InvalidColumn.java 0011b14 
  streaming/src/java/org/apache/hive/streaming/InvalidPartition.java f1f9804 
  streaming/src/java/org/apache/hive/streaming/InvalidTable.java ef1c91d 
  streaming/src/java/org/apache/hive/streaming/InvalidTransactionState.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidTrasactionState.java 
762f5f8 
  streaming/src/java/org/apache/hive/streaming/PartitionCreationFailed.java 
5f9aca6 
  streaming/src/java/org/apache/hive/streaming/PartitionHandler.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/PartitionInfo.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/QueryFailedException.java 
ccd3ae0 
  streaming/src/java/org/apache/hive/streaming/RecordWriter.java dc6d70e 
  streaming/src/java/org/apache/hive/streaming/SerializationError.java a57ba00 
  streaming/src/java/org/apache/hive/streaming/StreamingConnection.java 2f760ea 
  streaming/src/java/org/apache/hive/streaming/StreamingException.java a7f84c1 
  streaming/src/java/org/apache/hive/streaming/StreamingIOFailure.java 0dfbfa7 
  streaming/src/java/org/apache/hive/streaming/StrictDelimitedInputWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StrictJsonWriter.java 0077913 
  streaming/src/java/org/apache/hive/streaming/StrictRegexWriter.java c0b7324 
  streaming/src/java/org/apache/hive/streaming/TransactionBatch.java 2b05771 
  streaming/src/java/org/apache/hive/streaming/TransactionBatchUnAvailable.java 
a8c8cd4 
  streaming/src/java/org/apache/hive/streaming/TransactionError.java a331b20 
  streaming/src/test/org/apache/hive/streaming/TestDelimitedInputWriter.java 
f0843a1 
  streaming/src/test/org/apache/hive/streaming/TestStreaming.java 0ec3048 
  
streaming/src/test/org/apache/hive/streaming/TestStreamingDynamicPartitioning.java
 PRE-CREATION 


Diff: https://reviews.apache.org/r/66645/diff/10/

Changes: https://reviews.apache.org/r/66645/diff/9-10/


Testing
---


Thanks,

Prasanth_J



[jira] [Created] (HIVE-19366) Vectorization causing TestStreaming.testStreamBucketingMatchesRegularBucketing to fail

2018-04-30 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-19366:


 Summary: Vectorization causing 
TestStreaming.testStreamBucketingMatchesRegularBucketing to fail
 Key: HIVE-19366
 URL: https://issues.apache.org/jira/browse/HIVE-19366
 Project: Hive
  Issue Type: Sub-task
  Components: Streaming
Affects Versions: 3.0.0, 3.1.0
Reporter: Prasanth Jayachandran


Disabled vectorization for the 
TestStreaming#testStreamBucketingMatchesRegularBucketing test case in 
HIVE-19211 as it was giving incorrect results (the issue is most likely related 
to a wrong table directory location, which returns 0 splits).





[jira] [Created] (HIVE-19365) Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has different names in different scripts

2018-04-30 Thread Alan Gates (JIRA)
Alan Gates created HIVE-19365:
-

 Summary: Index on COMPLETED_TXN_COMPONENTS in Metastore RDBMS has 
different names in different scripts
 Key: HIVE-19365
 URL: https://issues.apache.org/jira/browse/HIVE-19365
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 3.0.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 3.0.0


In the mysql and mssql install scripts the index is called 
COMPLETED_TXN_COMPONENTS_IDX2. Everywhere else it is called 
COMPLETED_TXN_COMPONENTS_IDX, which is breaking the 3.0 to 3.1 upgrade scripts 
since they don't know which index to update. One name should be chosen and 
used everywhere.





[jira] [Created] (HIVE-19364) autoColumnStats_10.q test results for TestMiniLlapLocalCliDriver with insert-only transactional table does not match those with non-transactional table

2018-04-30 Thread Steve Yeom (JIRA)
Steve Yeom created HIVE-19364:
-

 Summary: autoColumnStats_10.q test results for 
TestMiniLlapLocalCliDriver with insert-only transactional table does not match 
those with non-transactional table
 Key: HIVE-19364
 URL: https://issues.apache.org/jira/browse/HIVE-19364
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 3.0.0
Reporter: Steve Yeom
Assignee: Steve Yeom








Re: Review Request 66571: HIVE-19161: Add authorizations to information schema

2018-04-30 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66571/#review202148
---




jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcStorageHandler.java
Lines 146 (patched)


indentation seems off here.



ql/src/java/org/apache/hadoop/hive/ql/metadata/JarUtils.java
Lines 55 (patched)


the accumulo-specific reference should be removed from this class



ql/src/java/org/apache/hadoop/hive/ql/metadata/JarUtils.java
Lines 143 (patched)


how about using the java8 try-with-resources style and skipping the finally block -
try (ZipFile zip = new ZipFile(jar)) {
  ...
}


- Thejas Nair


On April 28, 2018, 1:09 a.m., Daniel Dai wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66571/
> ---
> 
> (Updated April 28, 2018, 1:09 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> See HIVE-19161
> 
> 
> Diffs
> -
> 
>   
> accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/HiveAccumuloHelper.java
>  9fccb49 
>   accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java 
> 3a2facf 
>   
> accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/serde/CompositeAccumuloRowIdFactory.java
>  d8b9aa3 
>   
> accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/serde/DefaultAccumuloRowIdFactory.java
>  bae2930 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java f40c606 
>   
> itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
>  8ecbaad 
>   itests/hive-unit/pom.xml 3ae7f2f 
>   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestRestrictedList.java 
> 79fdb68 
>   
> itests/hive-unit/src/test/java/org/apache/hive/service/server/TestInformationSchemaWithPrivilege.java
>  PRE-CREATION 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcStorageHandler.java
>  df55272 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/DatabaseAccessorFactory.java
>  6d3c8d9 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessor.java
>  772bc5d 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/JdbcRecordIterator.java
>  638e2b0 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/MsSqlDatabaseAccessor.java
>  PRE-CREATION 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/OracleDatabaseAccessor.java
>  PRE-CREATION 
>   
> jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/dao/PostgresDatabaseAccessor.java
>  PRE-CREATION 
>   metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql 339 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java d59bf1f 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/JarUtils.java PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/HiveAuthorizationProvider.java
>  60d9dc1 
>   
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/PrivilegeSynchonizer.java
>  PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveResourceACLsImpl.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 60b63d4 
>   
> ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFCurrentGroups.java
>  PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFRestrictInformationSchema.java
>  PRE-CREATION 
>   ql/src/test/results/clientpositive/llap/resourceplan.q.out 9850276 
>   ql/src/test/results/clientpositive/show_functions.q.out 4df555b 
>   service/src/java/org/apache/hive/service/server/HiveServer2.java e373628 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
>  397a081 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
>  1c8d223 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
>  aee416d 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
>  184ecb6 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/RawStore.java
>  2c9f2e5 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
>  92d000b 
>   standalone-metastore/src/main/thrift/hive_metastore.thrift c56a4f9 
>   
> standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
>  defc68f 
>   
> 

[GitHub] hive pull request #338: Get proxy system

2018-04-30 Thread ey1984
Github user ey1984 closed the pull request at:

https://github.com/apache/hive/pull/338


---


[GitHub] hive pull request #338: Get proxy system

2018-04-30 Thread ey1984
GitHub user ey1984 reopened a pull request:

https://github.com/apache/hive/pull/338

Get proxy system

Hello,

I'm using your hive-jdbc (1.2.1) as a dependency for 2 applications deployed 
into 2 Docker containers (Java code and Python code).
Both containers run behind a proxy system when I deploy to the qualification 
and production environments.
For Java: -Dhttp.proxyHost=MyProxyHost -Dhttp.proxyPort=MyProxyPort
For Python: I set HTTP_PROXY=http://myproxyhost:myproxyport

After deploying and launching, a timeout occurs when I try to reach the 
Hive server. I call Hive by the URL jdbc://hive

So after debugging your source code, I added some code (PR as requested) in 
order to pick up the proxy settings (env from the OS or JVM configuration), and 
it works fine for both containers.

Is this correction acceptable, or is there another solution for reaching the 
Hive server through a proxy?

Python: I use jaydebeapi. Java: only 
DriverManager.getConnection("jdbc://hive2")

Thanks a lot
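The approach described above - picking up the proxy from JVM-style properties or the OS environment - can be sketched roughly like this (illustrative only; the property and variable names follow the conventions mentioned in the message, not the actual patch):

```python
import os
from typing import Dict, Optional, Tuple
from urllib.parse import urlparse

def detect_proxy(jvm_props: Optional[Dict[str, str]] = None
                 ) -> Optional[Tuple[str, int]]:
    """Return (host, port) from JVM-style properties if both are set,
    else from the HTTP_PROXY environment variable, else None."""
    jvm_props = jvm_props or {}
    host = jvm_props.get("http.proxyHost")
    port = jvm_props.get("http.proxyPort")
    if host and port:
        return host, int(port)
    url = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
    if url:
        parsed = urlparse(url)
        if parsed.hostname:
            return parsed.hostname, parsed.port or 80
    return None
```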


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ey1984/hive HIVE-Proxy-System

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/338.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #338


commit 2900b5c687f7a2e11bf1b56ddc17aa271e557ff2
Author: ey1984 
Date:   2018-04-30T21:31:39Z

Get proxy system




---


[jira] [Created] (HIVE-19362) enable LLAP cache affinity by default

2018-04-30 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-19362:
---

 Summary: enable LLAP cache affinity by default
 Key: HIVE-19362
 URL: https://issues.apache.org/jira/browse/HIVE-19362
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin








[jira] [Created] (HIVE-19363) remove cryptic metrics from LLAP IO output

2018-04-30 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-19363:
---

 Summary: remove cryptic metrics from LLAP IO output
 Key: HIVE-19363
 URL: https://issues.apache.org/jira/browse/HIVE-19363
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin








[GitHub] hive pull request #338: Get proxy system

2018-04-30 Thread ey1984
GitHub user ey1984 opened a pull request:

https://github.com/apache/hive/pull/338

Get proxy system

Hello,

I'm using your hive-jdbc (1.2.1) as a dependency for 2 applications deployed 
into 2 Docker containers (Java code and Python code).
Both containers run behind a proxy system when I deploy to the qualification 
and production environments.
For Java: -Dhttp.proxyHost=MyProxyHost -Dhttp.proxyPort=MyProxyPort
For Python: I set HTTP_PROXY=http://myproxyhost:myproxyport

After deploying and launching, a timeout occurs when I try to reach the 
Hive server. I call Hive by the URL jdbc://hive

So after debugging your source code, I added some code (PR as requested) in 
order to pick up the proxy settings (env from the OS or JVM configuration), and 
it works fine for both containers.

Is this correction acceptable, or is there another solution for reaching the 
Hive server through a proxy?

Python: I use jaydebeapi. Java: only 
DriverManager.getConnection("jdbc://hive2")

Thanks a lot


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ey1984/hive HIVE-Proxy-System

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/338.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #338


commit 2900b5c687f7a2e11bf1b56ddc17aa271e557ff2
Author: ey1984 
Date:   2018-04-30T21:31:39Z

Get proxy system




---


Re: Review Request 66805: HIVE-19311 : Partition and bucketing support for “load data” statement

2018-04-30 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66805/
---

(Updated April 30, 2018, 8:38 p.m.)


Review request for hive, Ashutosh Chauhan, Eugene Koifman, Jesús Camacho 
Rodríguez, and Vineet Garg.


Changes
---

The results were updated due to the use of murmur hash.


Bugs: HIVE-19311
https://issues.apache.org/jira/browse/HIVE-19311


Repository: hive-git


Description
---

Currently, the "load data" statement is very limited. It errors out if any required 
information is missing, such as partitioning info when the table is partitioned, or 
appropriately named files when the table is bucketed.
It should instead be able to launch an insert job to load the data.


Diffs (updated)
-

  data/files/load_data_job/bucketing.txt PRE-CREATION 
  data/files/load_data_job/load_data_1_partition.txt PRE-CREATION 
  data/files/load_data_job/partitions/load_data_1_partition.txt PRE-CREATION 
  data/files/load_data_job/partitions/load_data_2_partitions.txt PRE-CREATION 
  itests/src/test/resources/testconfiguration.properties 2ca7b5f63b 
  ql/src/java/org/apache/hadoop/hive/ql/Context.java 0fedf0e76e 
  ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 7d33fa3892 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java abd678bb54 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
c07991d434 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 020565014b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/UpdateDeleteSemanticAnalyzer.java 
2f3b07f4af 
  ql/src/test/org/apache/hadoop/hive/ql/TestTxnLoadData.java ec8c1507ec 
  ql/src/test/queries/clientnegative/load_part_nospec.q 81517991b2 
  ql/src/test/queries/clientnegative/nopart_load.q 966982fd5c 
  ql/src/test/queries/clientpositive/load_data_using_job.q PRE-CREATION 
  ql/src/test/results/clientnegative/load_part_nospec.q.out bebaf92311 
  ql/src/test/results/clientnegative/nopart_load.q.out 881514640c 
  ql/src/test/results/clientpositive/llap/load_data_using_job.q.out 
PRE-CREATION 


Diff: https://reviews.apache.org/r/66805/diff/5/

Changes: https://reviews.apache.org/r/66805/diff/4-5/


Testing
---

Added a unit test.


Thanks,

Deepak Jaiswal



[jira] [Created] (HIVE-19361) Backport HIVE-18910 to branch -3

2018-04-30 Thread Deepak Jaiswal (JIRA)
Deepak Jaiswal created HIVE-19361:
-

 Summary: Backport HIVE-18910 to branch -3
 Key: HIVE-19361
 URL: https://issues.apache.org/jira/browse/HIVE-19361
 Project: Hive
  Issue Type: Bug
Reporter: Deepak Jaiswal
Assignee: Deepak Jaiswal


Please see HIVE-18910





Re: [VOTE] Should we release storage-api 2.6.0 rc0?

2018-04-30 Thread Owen O'Malley
With 3 +1 votes and no -1's the vote passes. Thanks Jesus and Alan!

.. Owen

On Fri, Apr 27, 2018 at 9:31 AM, Alan Gates  wrote:

> +1 Did a build in a clean mvn repo, ran rat, looked over NOTICE and LICENSE
> files.
>
> On Fri, Apr 27, 2018 at 8:53 AM, Jesus Camacho Rodriguez <
> jcama...@apache.org> wrote:
>
> > +1
> > - compiled from src
> > - ran unit tests
> > - ran rat
> >
> > -Jesús
> >
> >
> >
> > On 4/26/18, 8:30 AM, "Owen O'Malley"  wrote:
> >
> > All,
> >I'd like to make a new release of the storage-api.
> >
> > Artifacts:
> > tag: https://github.com/apache/hive/releases/tag/storage-
> > release-2.6.0-rc0
> > tar ball: http://home.apache.org/~omalley/storage-2.6.0/
> >
> > Thanks,
> >Owen
> >
> >
> >
> >
>


[jira] [Created] (HIVE-19360) CBO: Add an "optimizedSQL" to QueryPlan object

2018-04-30 Thread Gopal V (JIRA)
Gopal V created HIVE-19360:
--

 Summary: CBO: Add an "optimizedSQL" to QueryPlan object 
 Key: HIVE-19360
 URL: https://issues.apache.org/jira/browse/HIVE-19360
 Project: Hive
  Issue Type: Improvement
  Components: CBO, Diagnosability
Affects Versions: 3.1.0
Reporter: Gopal V


Calcite RelNodes can be converted back into SQL (as the new JDBC storage 
handler does), which allows Hive to print out the post-CBO plan as a SQL query 
instead of having to guess the join orders from the subsequent Tez plan.

The generated query might not always be valid SQL at this point, but it is a 
world ahead of DAG plans in readability.

E.g., the tpc-ds Query4 CTEs get expanded to:

{code}
SELECT t16.$f3 customer_preferred_cust_flag
FROM
  (SELECT t0.c_customer_id $f0,
   SUM((t2.ws_ext_list_price - t2.ws_ext_wholesale_cost 
- t2.ws_ext_discount_amt + t2.ws_ext_sales_price) / CAST(2 AS DECIMAL(10, 0))) 
$f8
   FROM
 (SELECT c_customer_sk,
 c_customer_id,
 c_first_name,
 c_last_name,
 c_preferred_cust_flag,
 c_birth_country,
 c_login,
 c_email_address
  FROM default.customer
  WHERE c_customer_sk IS NOT NULL
AND c_customer_id IS NOT NULL) t0
   INNER JOIN (
 (SELECT ws_sold_date_sk,
 ws_bill_customer_sk,
 ws_ext_discount_amt,
 ws_ext_sales_price,
 ws_ext_wholesale_cost,
 ws_ext_list_price
  FROM default.web_sales
  WHERE ws_bill_customer_sk IS NOT NULL
AND ws_sold_date_sk IS NOT NULL) t2
   INNER JOIN
 (SELECT d_date_sk,
 CAST(2002 AS INTEGER) d_year
  FROM default.date_dim
  WHERE d_year = 2002
AND d_date_sk IS NOT NULL) t4 ON t2.ws_sold_date_sk = 
t4.d_date_sk) ON t0.c_customer_sk = t2.ws_bill_customer_sk
   GROUP BY t0.c_customer_id,
t0.c_first_name,
t0.c_last_name,
t0.c_preferred_cust_flag,
t0.c_birth_country,
t0.c_login,
t0.c_email_address) t7
INNER JOIN (
  (SELECT t9.c_customer_id $f0,
   t9.c_preferred_cust_flag $f3,

SUM((t11.ss_ext_list_price - t11.ss_ext_wholesale_cost - 
t11.ss_ext_discount_amt + t11.ss_ext_sales_price) / CAST(2 AS DECIMAL(10, 0))) 
$f8
   FROM
 (SELECT c_customer_sk,
 c_customer_id,
 c_first_name,
 c_last_name,
 c_preferred_cust_flag,
 c_birth_country,
 c_login,
 c_email_address
  FROM default.customer
  WHERE c_customer_sk IS NOT NULL
AND c_customer_id IS NOT NULL) t9
   INNER JOIN (
 (SELECT ss_sold_date_sk,
 ss_customer_sk,
 ss_ext_discount_amt,
 ss_ext_sales_price,
 ss_ext_wholesale_cost,
 ss_ext_list_price
  FROM default.store_sales
  WHERE ss_customer_sk IS NOT NULL
AND ss_sold_date_sk IS NOT NULL) t11
   INNER JOIN
 (SELECT d_date_sk,
 CAST(2002 AS INTEGER) d_year
  FROM default.date_dim
  WHERE d_year = 2002
AND d_date_sk IS NOT NULL) t13 ON 
t11.ss_sold_date_sk = t13.d_date_sk) ON t9.c_customer_sk = t11.ss_customer_sk
   GROUP BY t9.c_customer_id,
t9.c_first_name,
t9.c_last_name,
t9.c_preferred_cust_flag,
t9.c_birth_country,
t9.c_login,
t9.c_email_address) t16
INNER JOIN (
  (SELECT t18.c_customer_id $f0,
SUM((t20.cs_ext_list_price 
- t20.cs_ext_wholesale_cost - t20.cs_ext_discount_amt + t20.cs_ext_sales_price) 
/ CAST(2 AS DECIMAL(10, 0))) $f8
   FROM
 (SELECT c_customer_sk,
 c_customer_id,
 c_first_name,
 c_last_name,