Re: Welcome new Hive committer, Zhihai Xu

2017-05-05 Thread Peter Vary
Congratulations Zhihai!

On May 5, 2017 at 18:52, "Xuefu Zhang" wrote:

> Hi all,
>
> I'm very pleased to announce that the Hive PMC has recently voted to offer
> Zhihai a committership, which he accepted. Please join me in congratulating
> him on this recognition and thanking him for his contributions to Hive.
>
> Regards,
> Xuefu
>


[jira] [Created] (HIVE-16601) Display Session Id, Query Name / Id, and Dag Id in Spark UI

2017-05-05 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-16601:
---

 Summary: Display Session Id, Query Name / Id, and Dag Id in Spark 
UI
 Key: HIVE-16601
 URL: https://issues.apache.org/jira/browse/HIVE-16601
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Sahil Takiar
Assignee: Sahil Takiar


We should display the Session Id for each HoS application launched, and the 
Query Name / Id and Dag Id for each Spark job launched.

This should help with debuggability of HoS applications. The Hive-on-Tez UI 
does something similar.

Related issues for Hive-on-Tez: HIVE-12357, HIVE-12523



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: How to create a patch that contains a binary file

2017-05-05 Thread Juan Rodríguez Hortalá
Hi Owen,

That worked just fine, and I can now apply the patch with `git apply` and
the ORC file is ok.

Thanks a lot for your help.

Greetings,

Juan


On Fri, May 5, 2017 at 2:55 PM, Owen O'Malley 
wrote:

> Try:
>
> % git format-patch --stdout HEAD^ > HIVE-1234.1.patch
>
> That will generate a git format patch that should preserve the binary file.
>
> .. Owen
>
> On Fri, May 5, 2017 at 2:43 PM, Juan Rodríguez Hortalá <
> juan.rodriguez.hort...@gmail.com> wrote:
>
> > Hi,
> >
> > For HIVE-16539 I created a patch that adds a new ORC file, using `git
> diff
> > --no-prefix` as specified in
> > https://cwiki.apache.org/confluence/display/Hive/HowToContribute#
> > HowToContribute-CreatingaPatch.
> > The corresponding jenkins build
> >  > failed/238_UTBatch_itests__hive-blobstore_2_tests/logs/hive.log>
> > is failing with
> >
> > 2017-05-05T10:00:30,151 ERROR [4dda13e3-e900-4d86-a654-bca8c14720cd
> > main] ql.Driver: FAILED: SemanticException Line 3:23 Invalid path
> > ''../../data/files/part.orc'': No files matching path
> > file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-
> > github-source-source/data/files/part.orc
> > org.apache.hadoop.hive.ql.parse.SemanticException: Line 3:23 Invalid
> > path ''../../data/files/part.orc'': No files matching path
> > file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-
> > github-source-source/data/files/part.orc
> >
> >
> > I think this is because the patch is not creating the ORC file
> > correctly when it is applied. When I apply the patch locally on an
> > updated clone of https://github.com/apache/hive.git in master, the
> > patch applies OK, but the resulting file data/files/part.orc is
> > different from the original file I used to build the patch, and when I
> > try to load it into a table in a local hive instance I get "FAILED:
> > SemanticException Unable to load data to destination table. Error: The
> > file that you are trying to load does not match the file format of the
> > destination table". Similarly, `hive --service orcfiledump
> > data/files/part.orc` fails with "Exception in thread "main"
> > java.lang.IndexOutOfBoundsException".
> >
> > So it looks like the patch is malformed for the ORC file because it is
> > binary. Should I use bsdiff to build the patch instead? What is the
> > expected way for building patches involving binary files?
> >
> >
> > Thanks,
> >
> >
> > Juan
> >
>


Re: How to create a patch that contains a binary file

2017-05-05 Thread Owen O'Malley
Try:

% git format-patch --stdout HEAD^ > HIVE-1234.1.patch

That will generate a git format patch that should preserve the binary file.

.. Owen
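Owen's one-liner can be sketched end-to-end as below (a hedged, illustrative sketch: the temporary repository, the stand-in `part.orc` contents, and the patch name are all hypothetical). The point is that `git format-patch` records binary changes as a "GIT binary patch" section, which the text-only output of `git diff --no-prefix` does not preserve:

```shell
# Illustrative sketch: commit a small stand-in binary file and produce
# a format-patch that preserves its contents.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m 'base'
printf 'ORC\000\001\002' > part.orc   # stand-in for a real ORC file
git add part.orc
git commit -q -m 'add binary test file'
git format-patch --stdout HEAD^ > HIVE-1234.1.patch
# the patch carries the file as an encoded "GIT binary patch" section
binary_sections=$(grep -c 'GIT binary patch' HIVE-1234.1.patch)
echo "$binary_sections"
```

Applying such a patch with `git apply HIVE-1234.1.patch` on a clean checkout should then restore the binary file byte-for-byte, which matches what Juan reported after switching approaches.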

On Fri, May 5, 2017 at 2:43 PM, Juan Rodríguez Hortalá <
juan.rodriguez.hort...@gmail.com> wrote:

> Hi,
>
> For HIVE-16539 I created a patch that adds a new ORC file, using `git diff
> --no-prefix` as specified in
> https://cwiki.apache.org/confluence/display/Hive/HowToContribute#
> HowToContribute-CreatingaPatch.
> The corresponding jenkins build
>  failed/238_UTBatch_itests__hive-blobstore_2_tests/logs/hive.log>
> is failing with
>
> 2017-05-05T10:00:30,151 ERROR [4dda13e3-e900-4d86-a654-bca8c14720cd
> main] ql.Driver: FAILED: SemanticException Line 3:23 Invalid path
> ''../../data/files/part.orc'': No files matching path
> file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-
> github-source-source/data/files/part.orc
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 3:23 Invalid
> path ''../../data/files/part.orc'': No files matching path
> file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-
> github-source-source/data/files/part.orc
>
>
> I think this is because the patch is not creating the ORC file
> correctly when it is applied. When I apply the patch locally on an
> updated clone of https://github.com/apache/hive.git in master, the
> patch applies OK, but the resulting file data/files/part.orc is
> different from the original file I used to build the patch, and when I
> try to load it into a table in a local hive instance I get "FAILED:
> SemanticException Unable to load data to destination table. Error: The
> file that you are trying to load does not match the file format of the
> destination table". Similarly, `hive --service orcfiledump
> data/files/part.orc` fails with "Exception in thread "main"
> java.lang.IndexOutOfBoundsException".
>
> So it looks like the patch is malformed for the ORC file because it is
> binary. Should I use bsdiff to build the patch instead? What is the
> expected way for building patches involving binary files?
>
>
> Thanks,
>
>
> Juan
>


How to create a patch that contains a binary file

2017-05-05 Thread Juan Rodríguez Hortalá
Hi,

For HIVE-16539 I created a patch that adds a new ORC file, using `git diff
--no-prefix` as specified in
https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CreatingaPatch.
The corresponding jenkins build

is failing with

2017-05-05T10:00:30,151 ERROR [4dda13e3-e900-4d86-a654-bca8c14720cd
main] ql.Driver: FAILED: SemanticException Line 3:23 Invalid path
''../../data/files/part.orc'': No files matching path
file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-github-source-source/data/files/part.orc
org.apache.hadoop.hive.ql.parse.SemanticException: Line 3:23 Invalid
path ''../../data/files/part.orc'': No files matching path
file:/home/hiveptest/35.188.114.194-hiveptest-1/apache-github-source-source/data/files/part.orc


I think this is because the patch is not creating the ORC file
correctly when it is applied. When I apply the patch locally on an
updated clone of https://github.com/apache/hive.git in master, the
patch applies OK, but the resulting file data/files/part.orc is
different from the original file I used to build the patch, and when I
try to load it into a table in a local hive instance I get "FAILED:
SemanticException Unable to load data to destination table. Error: The
file that you are trying to load does not match the file format of the
destination table". Similarly, `hive --service orcfiledump
data/files/part.orc` fails with "Exception in thread "main"
java.lang.IndexOutOfBoundsException".

So it looks like the patch is malformed for the ORC file because it is
binary. Should I use bsdiff to build the patch instead? What is the
expected way for building patches involving binary files?


Thanks,


Juan


[jira] [Created] (HIVE-16600) Refactor SetSparkReducerParallelism#needSetParallelism to enable parallel order by in multi_insert cases

2017-05-05 Thread liyunzhang_intel (JIRA)
liyunzhang_intel created HIVE-16600:
---

 Summary: Refactor SetSparkReducerParallelism#needSetParallelism to 
enable parallel order by in multi_insert cases
 Key: HIVE-16600
 URL: https://issues.apache.org/jira/browse/HIVE-16600
 Project: Hive
  Issue Type: Sub-task
Reporter: liyunzhang_intel


In multi-insert cases like multi_insert_gby2.q, the parallelism of the SORT 
operator is 1 even when we set "hive.optimize.sampling.orderby" = true. This is 
because the logic of SetSparkReducerParallelism#needSetParallelism does not 
support this case.
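A query of the affected shape looks roughly as follows (an illustrative fragment: the table and column names are hypothetical, and this shows the multi-insert-with-ORDER-BY pattern rather than the exact contents of multi_insert_gby2.q):

```sql
SET hive.optimize.sampling.orderby=true;

-- Multi-insert: one scan of src feeds two insert branches; the ORDER BY
-- branch still gets a single SORT reducer despite sampling being enabled.
FROM src
INSERT OVERWRITE TABLE t1
  SELECT key, value ORDER BY key
INSERT OVERWRITE TABLE t2
  SELECT key, count(*) GROUP BY key;
```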



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-16599) NPE in runtime filtering cost when handling SMB Joins

2017-05-05 Thread Deepak Jaiswal (JIRA)
Deepak Jaiswal created HIVE-16599:
-

 Summary: NPE in runtime filtering cost when handling SMB Joins
 Key: HIVE-16599
 URL: https://issues.apache.org/jira/browse/HIVE-16599
 Project: Hive
  Issue Type: Bug
Reporter: Deepak Jaiswal
Assignee: Deepak Jaiswal


A test with SMB joins failed with NPE in runtime filtering costing logic.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-16598) LlapServiceDriver - create directories and warn of errors

2017-05-05 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-16598:
---

 Summary: LlapServiceDriver - create directories and warn of errors
 Key: HIVE-16598
 URL: https://issues.apache.org/jira/browse/HIVE-16598
 Project: Hive
  Issue Type: Bug
Reporter: Kavan Suresh
Assignee: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-16597) Replace use of Map for partSpec with List>

2017-05-05 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-16597:


 Summary: Replace use of Map for partSpec with 
List>
 Key: HIVE-16597
 URL: https://issues.apache.org/jira/browse/HIVE-16597
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair


As discussed in [HIVE-13652 comment 
|https://issues.apache.org/jira/browse/HIVE-13652?focusedCommentId=15998857&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15998857]
 the use of Map for partSpec in AddPartitionDesc makes it 
vulnerable to similar mistakes like what happened with issue in HIVE-13652.

We should cleanup the code to use List> .





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: pre-commit jenkins issues

2017-05-05 Thread Sushanth Sowmyan
Thanks! It looks like it's chugging away now. :)

On May 5, 2017 08:22, "Sergio Pena"  wrote:

> I restarted hiveptest and it seems to be working now. There was a hiccup on the
> server while using the libraries to create the slave nodes.
>
> On Fri, May 5, 2017 at 12:05 AM, Sushanth Sowmyan 
> wrote:
>
> > Hi,
> >
> > It looks like the precommit queue is currently having issues :
> > https://builds.apache.org/job/PreCommit-HIVE-Build/
> >
> > See builds #5041, #5042, #5043 - it looks like each spends about 8 hours
> > waiting for the tests to finish running and report back, is then killed
> > for exceeding the 500-minute timeout, and returns without results. Is
> > anyone able to look into this to see what is going on?
> >
> > Thanks!
> > -Sush
> >
>


[jira] [Created] (HIVE-16596) CrossProductCheck failed to detect cross product between two unions

2017-05-05 Thread Zhiyuan Yang (JIRA)
Zhiyuan Yang created HIVE-16596:
---

 Summary: CrossProductCheck failed to detect cross product between 
two unions
 Key: HIVE-16596
 URL: https://issues.apache.org/jira/browse/HIVE-16596
 Project: Hive
  Issue Type: Bug
Reporter: Zhiyuan Yang
Assignee: Zhiyuan Yang


To reproduce:
{code}
create table f (a int, b string);
set hive.auto.convert.join=false;
explain select * from (select * from f union all select * from f) a join 
(select * from f union all select * from f) b;
{code}

No cross product warning is given.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 58934: HIVE-16568: Support complex types in external LLAP InputFormat

2017-05-05 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58934/#review174049
---


Ship it!




Ship It!

- Prasanth_J


On May 5, 2017, 10:30 a.m., Jason Dere wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/58934/
> ---
> 
> (Updated May 5, 2017, 10:30 a.m.)
> 
> 
> Review request for hive, Gunther Hagleitner, Prasanth_J, and Siddharth Seth.
> 
> 
> Bugs: HIVE-16568
> https://issues.apache.org/jira/browse/HIVE-16568
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> - Support list/map/struct types in the LLAPRowInputFormat Schema/TypeDesc
> - Support list/map/struct types in the LLAPRowInputFormat Row. Changes in the 
> Row getters/setters needed (no longer using Writable).
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
>  654e92b 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java 
> de47412 
>   llap-client/src/java/org/apache/hadoop/hive/llap/LlapRowRecordReader.java 
> ee92f3e 
>   llap-common/src/java/org/apache/hadoop/hive/llap/FieldDesc.java 9621978 
>   llap-common/src/java/org/apache/hadoop/hive/llap/Row.java a84fadc 
>   llap-common/src/java/org/apache/hadoop/hive/llap/TypeDesc.java dda5928 
>   llap-common/src/test/org/apache/hadoop/hive/llap/TestRow.java d4e68f4 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java 
> 9ddbd7e 
>   ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
> b003eb8 
> 
> 
> Diff: https://reviews.apache.org/r/58934/diff/3/
> 
> 
> Testing
> ---
> 
> Added test to TestJdbcWithMiniLlap
> 
> 
> Thanks,
> 
> Jason Dere
> 
>



Re: Welcome new Hive committer, Zhihai Xu

2017-05-05 Thread Vihang Karajgaonkar
Congratulations Zhihai!

On Fri, May 5, 2017 at 10:19 AM, Jimmy Xiang  wrote:

> Congrats!!
>
> On Fri, May 5, 2017 at 10:15 AM, Chinna Rao Lalam
>  wrote:
> > Congratulations Zhihai...
> >
> > On Fri, May 5, 2017 at 10:22 PM, Xuefu Zhang  wrote:
> >>
> >> Hi all,
> >>
> >> I'm very pleased to announce that the Hive PMC has recently voted to offer
> >> Zhihai a committership, which he accepted. Please join me in
> >> congratulating him on this recognition and thanking him for his
> >> contributions to Hive.
> >>
> >> Regards,
> >> Xuefu
> >
> >
> >
> >
> > --
> > Hope It Helps,
> > Chinna
>


Re: Review Request 58936: HIVE-16143 : Improve msck repair batching

2017-05-05 Thread Vihang Karajgaonkar


> On May 5, 2017, 12:54 a.m., Sahil Takiar wrote:
> > common/src/java/org/apache/hive/common/util/RetryUtilities.java
> > Lines 25 (patched)
> > 
> >
> > Might want to looking https://github.com/rholder/guava-retrying
> 
> Sahil Takiar wrote:
> look into*

Thanks for the pointer. I took a quick look. It has some interesting ideas, but 
it doesn't seem to support reducing the workload size exponentially: it has an 
exponential backoff retry interval, but nothing in terms of workload sizing. 
Also, I have never used this library before. Is it popular and production 
ready? The last commit on this library was 2 years ago.
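The "reducing the workload size exponentially" idea under discussion can be sketched as below (a standalone, hypothetical sketch, not Hive's actual RetryUtilities API: `processAll` and the simulated failure threshold are illustrative). On each failure the batch size is halved until batches are small enough to succeed:

```java
import java.util.Arrays;
import java.util.List;

public class ExponentialBatchRetry {
    // Process a list of partitions in batches; when a batch fails,
    // halve the batch size and retry instead of giving up.
    static int processAll(List<String> parts, int initialBatch, int failThreshold) {
        int batchSize = initialBatch;
        int calls = 0;
        int done = 0;
        while (done < parts.size()) {
            int end = Math.min(done + batchSize, parts.size());
            calls++;
            if (end - done > failThreshold) {
                // simulate a metastore failure on oversized batches:
                // shrink the workload exponentially and retry
                batchSize = Math.max(1, batchSize / 2);
            } else {
                done = end; // batch succeeded; move on
            }
        }
        return calls; // total attempts, successful and failed
    }

    public static void main(String[] args) {
        List<String> parts =
            Arrays.asList("p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8");
        System.out.println(processAll(parts, 8, 2));
    }
}
```

This is distinct from the exponential *backoff interval* the guava-retrying library provides: there the wait time grows between retries, whereas here the unit of work shrinks.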


> On May 5, 2017, 12:54 a.m., Sahil Takiar wrote:
> > itests/hive-blobstore/src/test/queries/clientpositive/create_like.q
> > Lines 24 (patched)
> > 
> >
> > is this necessary?

I think this is a better way to handle the added partitions in the q.out files. 
The "Repair: Added partition to metastore..." line is added to the output based 
on the order of iteration over a HashSet, which is not very reliable and is 
prone to flakiness (across different Java distributions and different versions 
of the same Java).


> On May 5, 2017, 12:54 a.m., Sahil Takiar wrote:
> > itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
> > Lines 1697 (patched)
> > 
> >
> > Why does this need to be masked?

Same as above. We should not really rely on comparing this string in the q.out 
file, since the order can change, leading to flakiness.


- Vihang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58936/#review173989
---


On May 2, 2017, 10:25 p.m., Vihang Karajgaonkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/58936/
> ---
> 
> (Updated May 2, 2017, 10:25 p.m.)
> 
> 
> Review request for hive, Sergio Pena and Sahil Takiar.
> 
> 
> Bugs: HIVE-16143
> https://issues.apache.org/jira/browse/HIVE-16143
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-16143 : Improve msck repair batching
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hive/common/util/RetryUtilities.java 
> PRE-CREATION 
>   common/src/test/org/apache/hive/common/util/TestRetryUtilities.java 
> PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/create_like.q 
> 38f384e4c547d3c93d510b89fccfbc2b8e2cba09 
>   itests/hive-blobstore/src/test/results/clientpositive/create_like.q.out 
> 0d362a716291637404a3859fe81068594d82c9e0 
>   itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 
> 2ae1eacb68cef6990ae3f2050af0bed7c8e9843f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
> 917e565f28b2c9aaea18033ea3b6b20fa41fcd0a 
>   
> ql/src/test/org/apache/hadoop/hive/ql/exec/TestMsckCreatePartitionsInBatches.java
>  PRE-CREATION 
>   ql/src/test/queries/clientpositive/msck_repair_0.q 
> 22542331621ca4ce5277c2f46a4264b7540a4d1e 
>   ql/src/test/queries/clientpositive/msck_repair_1.q 
> ea596cbbd2d4c230f2b5afbe379fc1e8836b6fbd 
>   ql/src/test/queries/clientpositive/msck_repair_2.q 
> d8338211e970ebac68a7471ee0960ccf2d51cba3 
>   ql/src/test/queries/clientpositive/msck_repair_3.q 
> fdefca121a2de361dbd19e7ef34fb220e1733ed2 
>   ql/src/test/queries/clientpositive/msck_repair_batchsize.q 
> e56e97ac36a6544f3e20478fdb0e8fa783a857ef 
>   ql/src/test/results/clientpositive/msck_repair_0.q.out 
> 2e0d9dc423071ebbd9a55606f196cf7752e27b1a 
>   ql/src/test/results/clientpositive/msck_repair_1.q.out 
> 3f2fe75b194f1248bd5c073dd7db6b71b2ffc2ba 
>   ql/src/test/results/clientpositive/msck_repair_2.q.out 
> 3f2fe75b194f1248bd5c073dd7db6b71b2ffc2ba 
>   ql/src/test/results/clientpositive/msck_repair_3.q.out 
> 3f2fe75b194f1248bd5c073dd7db6b71b2ffc2ba 
>   ql/src/test/results/clientpositive/msck_repair_batchsize.q.out 
> ba99024163a1f2c59d59e9ed7ea276c154c99d24 
>   ql/src/test/results/clientpositive/repair.q.out 
> c1834640a35500c521a904a115a718c94546df10 
> 
> 
> Diff: https://reviews.apache.org/r/58936/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Vihang Karajgaonkar
> 
>



Re: Welcome new Hive committer, Zhihai Xu

2017-05-05 Thread Jimmy Xiang
Congrats!!

On Fri, May 5, 2017 at 10:15 AM, Chinna Rao Lalam
 wrote:
> Congratulations Zhihai...
>
> On Fri, May 5, 2017 at 10:22 PM, Xuefu Zhang  wrote:
>>
>> Hi all,
>>
>> I'm very pleased to announce that the Hive PMC has recently voted to offer
>> Zhihai a committership, which he accepted. Please join me in congratulating
>> him on this recognition and thanking him for his contributions to Hive.
>>
>> Regards,
>> Xuefu
>
>
>
>
> --
> Hope It Helps,
> Chinna


Re: Welcome new Hive committer, Zhihai Xu

2017-05-05 Thread Chinna Rao Lalam
Congratulations Zhihai...

On Fri, May 5, 2017 at 10:22 PM, Xuefu Zhang  wrote:

> Hi all,
>
> I'm very pleased to announce that the Hive PMC has recently voted to offer
> Zhihai a committership, which he accepted. Please join me in congratulating
> him on this recognition and thanking him for his contributions to Hive.
>
> Regards,
> Xuefu
>



-- 
Hope It Helps,
Chinna


Welcome new Hive committer, Zhihai Xu

2017-05-05 Thread Xuefu Zhang
Hi all,

I'm very pleased to announce that the Hive PMC has recently voted to offer
Zhihai a committership, which he accepted. Please join me in congratulating
him on this recognition and thanking him for his contributions to Hive.

Regards,
Xuefu


Re: Review Request 59020: Support Parquet through HCatalog

2017-05-05 Thread Aihua Xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/59020/#review174039
---




hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileRecordWriterContainer.java
Lines 30 (patched)


This change looks good to me. 

Sergio, can you also help review the patch, since you are more familiar with 
Parquet?


- Aihua Xu


On May 5, 2017, 8:11 a.m., Adam Szita wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/59020/
> ---
> 
> (Updated May 5, 2017, 8:11 a.m.)
> 
> 
> Review request for hive, Aihua Xu and Sergio Pena.
> 
> 
> Bugs: HIVE-8838
> https://issues.apache.org/jira/browse/HIVE-8838
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Adding support for HCatalog to write tables stored in Parquet format
> 
> 
> Diffs
> -
> 
>   
> hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileRecordWriterContainer.java
>  b2abc5fbb3670893415354552239d67d072459ed 
>   
> hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/SpecialCases.java
>  60af5c0bf397273fb820f0ee31e578745dbc200f 
>   
> hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderComplexSchema.java
>  4c686fec596d39d41d458bc3ea2753877bd9df98 
>   
> hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderEncryption.java
>  ad11eab1b7e67541b56e90e4a85ba37b41a4db92 
>   
> hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatStorerMulti.java
>  918332ddfda58306707d326f8668b2c223110a29 
>   
> hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestParquetHCatLoader.java
>  6cd382145b55d6b85fc3366faeaba2aaef65ab04 
>   
> hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestParquetHCatStorer.java
>  6dfdc04954dd0b110b1a7194e69468b5dc2f842e 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java
>  a7bb5eedbb99f3cea4601b9fce9a0ad3461567d0 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
> b339cc4347eea143dca2f6d98f9aaafdc427 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
>  71a78cf040667bf14b6c720373e4acd102da19f4 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/ParquetRecordWriterWrapper.java
>  c021dafa480e65d7c0c19a5a85988464112468cb 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetOutputFormat.java
>  ec85b5df0f95cbd45b87259346ae9c1e5aa604a4 
> 
> 
> Diff: https://reviews.apache.org/r/59020/diff/1/
> 
> 
> Testing
> ---
> 
> Tested on cluster, and re-enabled previously disabled tests in HCatalog (for 
> Parquet) that were failing (this adds ~40 tests to be run)
> 
> 
> Thanks,
> 
> Adam Szita
> 
>



Re: pre-commit jenkins issues

2017-05-05 Thread Sergio Pena
I restarted hiveptest and it seems to be working now. There was a hiccup on the
server while using the libraries to create the slave nodes.

On Fri, May 5, 2017 at 12:05 AM, Sushanth Sowmyan 
wrote:

> Hi,
>
> It looks like the precommit queue is currently having issues :
> https://builds.apache.org/job/PreCommit-HIVE-Build/
>
> See builds #5041, #5042, #5043 - it looks like each spends about 8 hours
> waiting for the tests to finish running and report back, is then killed
> for exceeding the 500-minute timeout, and returns without results. Is
> anyone able to look into this to see what is going on?
>
> Thanks!
> -Sush
>


Review Request 59025: HIVE-15834 Add unit tests for org.json usage on master

2017-05-05 Thread daniel voros

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/59025/
---

Review request for hive.


Bugs: HIVE-15834
https://issues.apache.org/jira/browse/HIVE-15834


Repository: hive-git


Description
---

This adds test to all non-trivial usages of the org.json library.

This is the port of HIVE-15833 with a few additions to cover new paths.


Diffs
-

  common/src/java/org/apache/hadoop/hive/common/jsonexplain/Op.java 
03c59813202a8bb1cea98742596aebc6692ef88d 
  common/src/test/org/apache/hadoop/hive/common/jsonexplain/TestOp.java 
PRE-CREATION 
  common/src/test/org/apache/hadoop/hive/common/jsonexplain/TestStage.java 
PRE-CREATION 
  common/src/test/org/apache/hadoop/hive/common/jsonexplain/TestVertex.java 
PRE-CREATION 
  
common/src/test/org/apache/hadoop/hive/common/jsonexplain/tez/TestTezJsonParser.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 
4c24ab4cde9ced05607be7c569a1591cb2eea3e1 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 
f44661ebde6b47a936986c1038b0fa266a9f1d3a 
  ql/src/test/org/apache/hadoop/hive/ql/exec/TestExplainTask.java 
805bc5b45dea304ba52147fc88e6341659415dbd 
  ql/src/test/org/apache/hadoop/hive/ql/hooks/TestATSHook.java PRE-CREATION 


Diff: https://reviews.apache.org/r/59025/diff/1/


Testing
---


Thanks,

daniel voros



[jira] [Created] (HIVE-16595) fix syntax in Hplsql.g4

2017-05-05 Thread Yishuang Lu (JIRA)
Yishuang Lu created HIVE-16595:
--

 Summary: fix syntax in Hplsql.g4
 Key: HIVE-16595
 URL: https://issues.apache.org/jira/browse/HIVE-16595
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Reporter: Yishuang Lu
Assignee: Yishuang Lu
 Fix For: 1.2.3


According to https://github.com/antlr/antlr4/issues/118, an incorrect error 
message might be returned if the start rule does not contain an explicit EOF 
transition. It is better to add EOF to the first rule in the grammar.
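The fix amounts to anchoring the grammar's entry rule with EOF, as in this illustrative fragment (the rule names are hypothetical, not HPL/SQL's actual start rule):

```antlr
// Before: program : block ;
// A parser for this rule may stop at a valid prefix and silently
// ignore trailing input, yielding confusing error messages.

// After: anchoring with EOF forces the whole input to be consumed,
// so trailing garbage produces a proper syntax error at the right spot.
program : block EOF ;
```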



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-16594) Add more tests for BeeLineDriver

2017-05-05 Thread Peter Vary (JIRA)
Peter Vary created HIVE-16594:
-

 Summary: Add more tests for BeeLineDriver
 Key: HIVE-16594
 URL: https://issues.apache.org/jira/browse/HIVE-16594
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Reporter: Peter Vary
Assignee: Peter Vary


We have the general infrastructure to run the BeeLine tests and produce the 
same results as the CliDriver tests.

Add some more tests to the BeeLineDriver, and iron out the remaining 
differences between the two infrastructures.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Review Request 58934: HIVE-16568: Support complex types in external LLAP InputFormat

2017-05-05 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58934/
---

(Updated May 5, 2017, 10:30 a.m.)


Review request for hive, Gunther Hagleitner, Prasanth_J, and Siddharth Seth.


Changes
---

Check for negative list/map size per Prasanth Jayachandran's comments.


Bugs: HIVE-16568
https://issues.apache.org/jira/browse/HIVE-16568


Repository: hive-git


Description
---

- Support list/map/struct types in the LLAPRowInputFormat Schema/TypeDesc
- Support list/map/struct types in the LLAPRowInputFormat Row. Changes in the 
Row getters/setters needed (no longer using Writable).


Diffs (updated)
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
 654e92b 
  itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java 
de47412 
  llap-client/src/java/org/apache/hadoop/hive/llap/LlapRowRecordReader.java 
ee92f3e 
  llap-common/src/java/org/apache/hadoop/hive/llap/FieldDesc.java 9621978 
  llap-common/src/java/org/apache/hadoop/hive/llap/Row.java a84fadc 
  llap-common/src/java/org/apache/hadoop/hive/llap/TypeDesc.java dda5928 
  llap-common/src/test/org/apache/hadoop/hive/llap/TestRow.java d4e68f4 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java 
9ddbd7e 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
b003eb8 


Diff: https://reviews.apache.org/r/58934/diff/3/

Changes: https://reviews.apache.org/r/58934/diff/2-3/


Testing
---

Added test to TestJdbcWithMiniLlap


Thanks,

Jason Dere



[jira] [Created] (HIVE-16593) SparkClientFactory.stop may prevent JVM from exiting

2017-05-05 Thread Rui Li (JIRA)
Rui Li created HIVE-16593:
-

 Summary: SparkClientFactory.stop may prevent JVM from exiting
 Key: HIVE-16593
 URL: https://issues.apache.org/jira/browse/HIVE-16593
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Review Request 59020: Support Parquet through HCatalog

2017-05-05 Thread Adam Szita

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/59020/
---

Review request for hive, Aihua Xu and Sergio Pena.


Bugs: HIVE-8838
https://issues.apache.org/jira/browse/HIVE-8838


Repository: hive-git


Description
---

Adding support for HCatalog to write tables stored in Parquet format


Diffs
-

  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileRecordWriterContainer.java
 b2abc5fbb3670893415354552239d67d072459ed 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/SpecialCases.java
 60af5c0bf397273fb820f0ee31e578745dbc200f 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderComplexSchema.java
 4c686fec596d39d41d458bc3ea2753877bd9df98 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderEncryption.java
 ad11eab1b7e67541b56e90e4a85ba37b41a4db92 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatStorerMulti.java
 918332ddfda58306707d326f8668b2c223110a29 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestParquetHCatLoader.java
 6cd382145b55d6b85fc3366faeaba2aaef65ab04 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestParquetHCatStorer.java
 6dfdc04954dd0b110b1a7194e69468b5dc2f842e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetOutputFormat.java 
a7bb5eedbb99f3cea4601b9fce9a0ad3461567d0 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetTableUtils.java 
b339cc4347eea143dca2f6d98f9aaafdc427 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
 71a78cf040667bf14b6c720373e4acd102da19f4 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/ParquetRecordWriterWrapper.java
 c021dafa480e65d7c0c19a5a85988464112468cb 
  
ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetOutputFormat.java
 ec85b5df0f95cbd45b87259346ae9c1e5aa604a4 


Diff: https://reviews.apache.org/r/59020/diff/1/


Testing
---

Tested on cluster, and re-enabled previously disabled tests in HCatalog (for 
Parquet) that were failing (this adds ~40 tests to be run)


Thanks,

Adam Szita



Re: Review Request 58934: HIVE-16568: Support complex types in external LLAP InputFormat

2017-05-05 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58934/#review174008
---




llap-client/src/java/org/apache/hadoop/hive/llap/LlapRowRecordReader.java
Lines 168 (patched)


If listSize < 0, should convertedVal be set to null?



llap-client/src/java/org/apache/hadoop/hive/llap/LlapRowRecordReader.java
Lines 181 (patched)


If mapSize is < 0, then getMap() will return null. Potential NPE here.
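The defensive pattern these comments ask for can be sketched as below (a self-contained, hypothetical sketch: `readList` merely stands in for the deserialization path in LlapRowRecordReader, using the convention that a negative encoded size denotes a null value):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class NegativeSizeDemo {
    // Hypothetical reader: a negative serialized size means the value
    // was null, so return null instead of allocating (or hitting an NPE
    // later when the caller dereferences a missing map/list).
    static List<Integer> readList(int size, Iterator<Integer> data) {
        if (size < 0) {
            return null;
        }
        List<Integer> out = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            out.add(data.next());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(readList(-1, Collections.<Integer>emptyIterator()));
        System.out.println(readList(2, Arrays.asList(7, 8).iterator()));
    }
}
```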


- Prasanth_J


On May 5, 2017, 2:49 a.m., Jason Dere wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/58934/
> ---
> 
> (Updated May 5, 2017, 2:49 a.m.)
> 
> 
> Review request for hive, Gunther Hagleitner, Prasanth_J, and Siddharth Seth.
> 
> 
> Bugs: HIVE-16568
> https://issues.apache.org/jira/browse/HIVE-16568
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> - Support list/map/struct types in the LLAPRowInputFormat Schema/TypeDesc
> - Support list/map/struct types in the LLAPRowInputFormat Row. Changes in the 
> Row getters/setters needed (no longer using Writable).
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
>  654e92b 
>   
> itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java 
> de47412 
>   llap-client/src/java/org/apache/hadoop/hive/llap/LlapRowRecordReader.java 
> ee92f3e 
>   llap-common/src/java/org/apache/hadoop/hive/llap/FieldDesc.java 9621978 
>   llap-common/src/java/org/apache/hadoop/hive/llap/Row.java a84fadc 
>   llap-common/src/java/org/apache/hadoop/hive/llap/TypeDesc.java dda5928 
>   llap-common/src/test/org/apache/hadoop/hive/llap/TestRow.java d4e68f4 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java 
> 9ddbd7e 
>   ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
> b003eb8 
> 
> 
> Diff: https://reviews.apache.org/r/58934/diff/2/
> 
> 
> Testing
> ---
> 
> Added test to TestJdbcWithMiniLlap
> 
> 
> Thanks,
> 
> Jason Dere
> 
>



[jira] [Created] (HIVE-16592) Vectorization: Long hashes use hash64shift and not hash6432shift to generate int hashCodes

2017-05-05 Thread Gopal V (JIRA)
Gopal V created HIVE-16592:
--

 Summary: Vectorization: Long hashes use hash64shift and not 
hash6432shift to generate int hashCodes
 Key: HIVE-16592
 URL: https://issues.apache.org/jira/browse/HIVE-16592
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)