[jira] [Commented] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408918#comment-15408918
 ] 

Hive QA commented on HIVE-14342:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822205/HIVE-14342.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10440 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/781/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/781/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-781/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822205 - PreCommit-HIVE-MASTER-Build

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14342.patch, HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases, the output is as follows:
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal, introduced in HIVE-6758 so that 
> beeline can run as a background process; the garbled output is an 
> unfortunate side effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.
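
The cause is easy to confirm outside of beeline. Below is a minimal probe - a
sketch assuming jline 2.x on the classpath (an assumption, not part of the
original report); run it with and without
-Djline.terminal=jline.UnsupportedTerminal to see which Terminal
implementation jline selects.

{code}
// Minimal sketch, assuming jline 2.x on the classpath. With
// -Djline.terminal=jline.UnsupportedTerminal, jline reports an
// unsupported terminal and stops doing line rendering, which is what
// produces the "null" artifacts shown above.
import jline.Terminal;
import jline.TerminalFactory;

public class TerminalProbe {
  public static void main(String[] args) {
    System.out.println("jline.terminal=" + System.getProperty("jline.terminal"));
    Terminal term = TerminalFactory.get();
    System.out.println(term.getClass().getName()
        + " supported=" + term.isSupported());
  }
}
{code}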



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14402) Vectorization: Fix Mapjoin overflow deserialization

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408819#comment-15408819
 ] 

Hive QA commented on HIVE-14402:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822192/HIVE-14402.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10440 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/779/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/779/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-779/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822192 - PreCommit-HIVE-MASTER-Build

> Vectorization: Fix Mapjoin overflow deserialization 
> 
>
> Key: HIVE-14402
> URL: https://issues.apache.org/jira/browse/HIVE-14402
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14402.1.patch, HIVE-14402.2.patch
>
>
> This is in a codepath currently disabled in master; however, enabling it 
> triggers an ArrayIndexOutOfBoundsException (OOB).
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setRef(BytesColumnVector.java:92)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeRowColumn(VectorDeserializeRow.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserialize(VectorDeserializeRow.java:674)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultLargeMultiValue(VectorMapJoinGenerateResultOperator.java:307)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:226)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultRepeatedAll(VectorMapJoinGenerateResultOperator.java:391)
> {code}
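
For context, the 1024 in the trace is the default vectorized batch size. A
hedged repro sketch follows - it assumes hive-exec on the classpath and is not
the actual mapjoin code path, just the boundary condition the trace points at.

{code}
// Illustrative sketch only (assumes hive-exec on the classpath).
// BytesColumnVector is allocated for VectorizedRowBatch.DEFAULT_SIZE
// (1024) rows; setRef one element past that boundary raises the same
// ArrayIndexOutOfBoundsException: 1024 shown in the stack trace.
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;

public class OverflowSketch {
  public static void main(String[] args) {
    BytesColumnVector col = new BytesColumnVector(); // 1024 slots
    byte[] value = "v".getBytes();
    col.setRef(1024, value, 0, value.length); // throws AIOOBE
  }
}
{code}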



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14029) Update Spark version to 2.0.0

2016-08-04 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408774#comment-15408774
 ] 

Rui Li commented on HIVE-14029:
---

Yeah, it'd be great if we could get rid of that tar, or at least make it 
smaller - we currently package the example jar into it, which shouldn't be 
necessary.

> Update Spark version to 2.0.0
> -
>
> Key: HIVE-14029
> URL: https://issues.apache.org/jira/browse/HIVE-14029
> Project: Hive
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
>
> There are quite a few new optimizations in Spark 2.0.0. We need to bump up 
> Spark to 2.0.0 to benefit from those performance improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14412) Add a timezone-aware timestamp

2016-08-04 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-14412:
--
Attachment: HIVE-14412.1.patch

Not sure why the test didn't run; trying again.

> Add a timezone-aware timestamp
> --
>
> Key: HIVE-14412
> URL: https://issues.apache.org/jira/browse/HIVE-14412
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-14412.1.patch, HIVE-14412.1.patch
>
>
> Java's Timestamp stores the time elapsed since the epoch. While it is by 
> itself unambiguous, ambiguity arises when we parse a string into a timestamp, 
> or convert a timestamp to a string, causing problems like HIVE-14305.
> To solve the issue, I think we should make the timestamp timezone-aware.
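
The ambiguity is easy to demonstrate with plain JDK classes (no Hive
involved); the same epoch instant renders as two different strings depending
on the JVM's default time zone:

{code}
// Minimal sketch of the ambiguity: one unambiguous epoch instant,
// two different string renderings depending on the default time zone.
import java.sql.Timestamp;
import java.util.TimeZone;

public class TimestampAmbiguity {
  public static void main(String[] args) {
    long epochMillis = 0L; // 1970-01-01T00:00:00Z, unambiguous by itself
    TimeZone.setDefault(TimeZone.getTimeZone("GMT"));
    System.out.println(new Timestamp(epochMillis)); // 1970-01-01 00:00:00.0
    TimeZone.setDefault(TimeZone.getTimeZone("GMT-8"));
    System.out.println(new Timestamp(epochMillis)); // 1969-12-31 16:00:00.0
  }
}
{code}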



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14411:

   Resolution: Fixed
     Assignee: Niklaus Xiao  (was: Ashutosh Chauhan)
Fix Version/s: 2.2.0
       Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Niklaus!

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Niklaus Xiao
> Fix For: 2.2.0
>
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table. 
> Example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-14420) Fix orc_llap_counters.q test failure in master

2016-08-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reopened HIVE-14420:
-

Failed again in the latest run: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/778/testReport/

> Fix orc_llap_counters.q test failure in master
> --
>
> Key: HIVE-14420
> URL: https://issues.apache.org/jira/browse/HIVE-14420
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408759#comment-15408759
 ] 

Ashutosh Chauhan commented on HIVE-14411:
-

+1

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table. 
> Example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408758#comment-15408758
 ] 

Niklaus Xiao commented on HIVE-14411:
-

Test failures are not related.

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table. 
> Example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14415) Upgrade qtest execution framework to junit4 - TestPerfCliDriver

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408740#comment-15408740
 ] 

Hive QA commented on HIVE-14415:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822174/HIVE-14415.2.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10439 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/778/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/778/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-778/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822174 - PreCommit-HIVE-MASTER-Build

> Upgrade qtest execution framework to junit4 - TestPerfCliDriver
> ---
>
> Key: HIVE-14415
> URL: https://issues.apache.org/jira/browse/HIVE-14415
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14415.1.patch, HIVE-14415.2.patch
>
>
> I would like to upgrade the current maven+ant+velocimacro+junit4 qtest 
> generation framework to use only junit4 - while (trying) to keep 
> all the existing features it provides.
> What I can't really do with the current one: easily execute a single qtest 
> from an IDE (as a matter of fact I can...but it's way too complicated; after 
> this it won't be a cake-walk either...but it will be a step closer ;)
> I think this change will make it clearer how these tests are configured 
> and executed.
> I will do this in two phases; currently I will only change 
> {{TestPerfCliDriver}}.
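
As a rough illustration of the junit4-only direction, a parameterized driver
can enumerate .q files directly - the sketch below is hypothetical (class
name, qfile path, and the run/diff step are placeholders, not the actual Hive
harness):

{code}
// Hypothetical sketch of a junit4-only qtest driver; the path and the
// body of runQFile() are placeholders, not Hive's real QTestUtil flow.
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestPerfCliDriverSketch {
  private final File qfile;

  public TestPerfCliDriverSketch(File qfile) {
    this.qfile = qfile;
  }

  @Parameters(name = "{0}")  // one named test case per .q file
  public static List<Object[]> qfiles() {
    List<Object[]> params = new ArrayList<Object[]>();
    File[] files = new File("ql/src/test/queries/clientpositive").listFiles();
    if (files != null) {
      for (File f : files) {
        if (f.getName().endsWith(".q")) {
          params.add(new Object[] { f });
        }
      }
    }
    return params;
  }

  @Test
  public void runQFile() throws Exception {
    // placeholder: run the qfile against the test cluster and diff the
    // output with the expected .q.out file
  }
}
{code}

With a shape like this, an IDE can run a single qfile case by name, which is
the pain point described above.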



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-11555) Beeline sends password in clear text if we miss -ssl=true flag in the connect string

2016-08-04 Thread Junjie Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HIVE-11555:
---
Comment: was deleted

(was: It should be simple if the ssl option is set to true by default.)

> Beeline sends password in clear text if we miss -ssl=true flag in the connect 
> string
> 
>
> Key: HIVE-11555
> URL: https://issues.apache.org/jira/browse/HIVE-11555
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.2.0
>Reporter: bharath v
>Assignee: Junjie Chen
>
> {code}
> I used tcpdump to display the network traffic: 
> [root@fe01 ~]# beeline 
> Beeline version 0.13.1-cdh5.3.2 by Apache Hive 
> beeline> !connect jdbc:hive2://fe01.sectest.poc:1/default 
> Connecting to jdbc:hive2://fe01.sectest.poc:1/default 
> Enter username for jdbc:hive2://fe01.sectest.poc:1/default: tdaranyi 
> Enter password for jdbc:hive2://fe01.sectest.poc:1/default: * 
> (I entered "cleartext" as the password) 
> The tcpdump in a different window 
> tdara...@fe01.sectest.poc:~$ sudo tcpdump -n -X -i lo port 1 
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode 
> listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes 
> (...) 
> 10:25:16.329974 IP 192.168.32.102.54322 > 192.168.32.102.ndmp: Flags [P.], 
> seq 11:35, ack 1, win 512, options [nop,nop,TS val 2412851969 ecr 
> 2412851969], length 24 
> 0x: 4500 004c 3dd3 4000 4006 3abc c0a8 2066 E..L=.@.@.:f 
> 0x0010: c0a8 2066 d432 2710 714c 0edc b45c 9268 ...f.2'.qL...\.h 
> 0x0020: 8018 0200 c25b  0101 080a 8fd1 3301 .[3. 
> 0x0030: 8fd1 3301 0500  1300 7464 6172 616e ..3...tdaran 
> 0x0040: 7969 0063 6c65 6172 7465 7874 yi.cleartext 
> (...) 
> {code}
> We rely on the user-supplied configuration to decide whether to open an SSL 
> socket or a plain one. Instead, we can negotiate this information with HS2 
> and connect accordingly.
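
One hedged sketch of what "negotiate this with HS2" could look like at the
socket level - illustrative only, not Beeline's actual connection logic; the
host and port below are placeholders:

{code}
// Illustrative only: probe whether the server port answers a TLS
// handshake before deciding on ssl=true, instead of trusting the URL.
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslProbe {
  public static boolean speaksTls(String host, int port) {
    try (SSLSocket s = (SSLSocket) SSLSocketFactory.getDefault()
        .createSocket(host, port)) {
      s.setSoTimeout(2000);  // don't hang on a silent plain socket
      s.startHandshake();    // fails if the server answers in plaintext
      return true;
    } catch (Exception e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // placeholder host/port, not taken from the report above
    System.out.println(speaksTls("localhost", 10000) ? "ssl=true" : "plain");
  }
}
{code}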



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14035) Enable predicate pushdown to delta files created by ACID Transactions

2016-08-04 Thread Saket Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408659#comment-15408659
 ] 

Saket Saurabh commented on HIVE-14035:
--

Thanks [~sershe] for the comments and the feedback. I am working on addressing 
them. Sure, I will also put up a separate patch with the default enabled for 
the HiveQA run.

> Enable predicate pushdown to delta files created by ACID Transactions
> -
>
> Key: HIVE-14035
> URL: https://issues.apache.org/jira/browse/HIVE-14035
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14035.02.patch, HIVE-14035.03.patch, 
> HIVE-14035.04.patch, HIVE-14035.05.patch, HIVE-14035.06.patch, 
> HIVE-14035.07.patch, HIVE-14035.08.patch, HIVE-14035.09.patch, 
> HIVE-14035.10.patch, HIVE-14035.11.patch, HIVE-14035.12.patch, 
> HIVE-14035.patch
>
>
> In the current Hive version, delta files created by ACID transactions do not 
> allow predicate pushdown if they contain any update/delete events. This is 
> done to preserve correctness when following a multi-version approach during 
> event collapsing, where an update event overwrites an existing insert event. 
> This JIRA proposes to split an update event into a combination of a delete 
> event followed by a new insert event, which can enable predicate pushdown to 
> all delta files without breaking correctness. To support backward 
> compatibility for this feature, this JIRA also proposes to add some form of 
> versioning to ACID that can allow different versions of ACID transactions to 
> co-exist.
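
Schematically, the proposed split turns one update event into two
self-contained events - a toy sketch, not Hive's actual ACID event classes:

{code}
// Toy sketch of the update-split idea (hypothetical classes, not the
// real ACID record structures): UPDATE(rowId, newValue) becomes
// DELETE(rowId) + INSERT(rowId, newValue), so no event needs to
// overwrite another during collapsing and predicate pushdown stays safe.
import java.util.Arrays;
import java.util.List;

public class UpdateSplitSketch {
  static class Event {
    final String type; final long rowId; final String value;
    Event(String type, long rowId, String value) {
      this.type = type; this.rowId = rowId; this.value = value;
    }
    @Override
    public String toString() {
      return type + "(" + rowId + (value == null ? "" : ", " + value) + ")";
    }
  }

  static List<Event> splitUpdate(long rowId, String newValue) {
    return Arrays.asList(new Event("DELETE", rowId, null),
                         new Event("INSERT", rowId, newValue));
  }

  public static void main(String[] args) {
    System.out.println(splitUpdate(42L, "v2")); // [DELETE(42), INSERT(42, v2)]
  }
}
{code}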



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14415) Upgrade qtest execution framework to junit4 - TestPerfCliDriver

2016-08-04 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408651#comment-15408651
 ] 

Zoltan Haindrich commented on HIVE-14415:
-

[~pvary] thank you for the review! I was not aware of HIVE-12033...
after these changes, the execution of these tests in an IDE won't be readily 
available, but it will be a step closer...I've seen a few things which will 
have to be solved prior to getting a working solution, like:
* qtests sometimes contain exact path references to data files...somewhere 
they use properties...I'm not sure whether those will work or not (in the case 
of -Pitests)
* there are a few maven repo references here and there - not sure how this can 
be avoided
* there is a "poisoning" core-site.xml in the common module...but generally the 
IDE shows all the resources to the test...which may get confused by the union 
of all hive main resources

these are just the ones I know of...if we team up, we can get a fully working 
IDE faster ;)


> Upgrade qtest execution framework to junit4 - TestPerfCliDriver
> ---
>
> Key: HIVE-14415
> URL: https://issues.apache.org/jira/browse/HIVE-14415
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14415.1.patch, HIVE-14415.2.patch
>
>
> I would like to upgrade the current maven+ant+velocimacro+junit4 qtest 
> generation framework to use only junit4 - while (trying) to keep 
> all the existing features it provides.
> What I can't really do with the current one: easily execute a single qtest 
> from an IDE (as a matter of fact I can...but it's way too complicated; after 
> this it won't be a cake-walk either...but it will be a step closer ;)
> I think this change will make it clearer how these tests are configured 
> and executed.
> I will do this in two phases; currently I will only change 
> {{TestPerfCliDriver}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14342:
-
Attachment: HIVE-14342.patch

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14342.patch, HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases, the output is as follows:
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal, introduced in HIVE-6758 so that 
> beeline can run as a background process; the garbled output is an 
> unfortunate side effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14342:
-
Status: Open  (was: Patch Available)

Pre-commit build failed due to a missing file from HIVE-14204. Retrying.

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14342.patch, HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases, the output is as follows:
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal, introduced in HIVE-6758 so that 
> beeline can run as a background process; the garbled output is an 
> unfortunate side effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14342:
-
Status: Patch Available  (was: Open)

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14342.patch, HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases, the output is as follows:
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal, introduced in HIVE-6758 so that 
> beeline can run as a background process; the garbled output is an 
> unfortunate side effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14434) Vectorization: BytesBytes lookup capped count can be =0, =1, >=2

2016-08-04 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14434:
---
Status: Patch Available  (was: Open)

> Vectorization: BytesBytes lookup capped count can be =0, =1, >=2
> 
>
> Key: HIVE-14434
> URL: https://issues.apache.org/jira/browse/HIVE-14434
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14434.1.patch
>
>
> BytesBytesHashmap does not implement deep counters for the depth of a probe; 
> however, it does distinguish between 0, 1, and > 1 rows.
> This information can be used in the vectorized hash join to avoid copying the 
> probe side keys.
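
The optimization reduces to a three-way "capped" count - a toy sketch of how
a join could consume it (names hypothetical, not the actual hashmap API):

{code}
// Toy sketch (hypothetical names): the map only needs to report
// 0, 1, or ">= 2" matches; only the multi-match case forces the join
// to copy probe-side keys so they survive row re-emission.
public class CappedCountSketch {
  static int cappedCount(int matches) {
    return matches >= 2 ? 2 : matches; // 2 stands for ">= 2"
  }

  public static void main(String[] args) {
    for (int m : new int[] {0, 1, 5}) {
      int capped = cappedCount(m);
      boolean copyProbeKeys = (capped == 2);
      System.out.println(m + " -> capped=" + capped
          + " copyProbeKeys=" + copyProbeKeys);
    }
  }
}
{code}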



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408628#comment-15408628
 ] 

Hive QA commented on HIVE-14342:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822163/HIVE-14342.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/777/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/777/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-777/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
 [copy] Copying 15 files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
spark-client ---
[INFO] Compiling 5 source files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/test-classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:copy (copy-guava-14) @ spark-client ---
[INFO] Configured Artifact: com.google.guava:guava:14.0.1:jar
[INFO] Copying guava-14.0.1.jar to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/dependency/guava-14.0.1.jar
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ spark-client ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ spark-client ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
spark-client ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ spark-client ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Query Language 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-exec ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-exec ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (generate-sources) @ hive-exec ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
Generating vector expression code
Generating vector expression test code
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java
 added.
[INFO] 
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory 
/data/hive-ptest/working/apache-github-source-source/ql/src/java
ANTLR Parser Generator  Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.

[jira] [Commented] (HIVE-14431) Recognize COALESCE as CASE and extend CASE simplification to cover more cases

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408623#comment-15408623
 ] 

Hive QA commented on HIVE-14431:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822143/HIVE-14431.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/776/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/776/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-776/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
 [copy] Copying 15 files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
spark-client ---
[INFO] Compiling 5 source files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/test-classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:copy (copy-guava-14) @ spark-client ---
[INFO] Configured Artifact: com.google.guava:guava:14.0.1:jar
[INFO] Copying guava-14.0.1.jar to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/dependency/guava-14.0.1.jar
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ spark-client ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ spark-client ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
spark-client ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ spark-client ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Query Language 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-exec ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-exec ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (generate-sources) @ hive-exec ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
Generating vector expression code
Generating vector expression test code
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java
 added.
[INFO] 
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory 
/data/hive-ptest/working/apache-github-source-source/ql/src/java
ANTLR Parser Generator  Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.

[jira] [Commented] (HIVE-14413) Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and extract more deterministic pieces out

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408620#comment-15408620
 ] 

Hive QA commented on HIVE-14413:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822120/HIVE-14413.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/775/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/775/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-775/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
 [copy] Copying 15 files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
spark-client ---
[INFO] Compiling 5 source files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/test-classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:copy (copy-guava-14) @ spark-client ---
[INFO] Configured Artifact: com.google.guava:guava:14.0.1:jar
[INFO] Copying guava-14.0.1.jar to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/dependency/guava-14.0.1.jar
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ spark-client ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ spark-client ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
spark-client ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ spark-client ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Query Language 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-exec ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-exec ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (generate-sources) @ hive-exec ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
Generating vector expression code
Generating vector expression test code
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java
 added.
[INFO] 
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory 
/data/hive-ptest/working/apache-github-source-source/ql/src/java
ANTLR Parser Generator  Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.

[jira] [Commented] (HIVE-14378) Data size may be estimated as 0 if no columns are being projected after an operator

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408616#comment-15408616
 ] 

Hive QA commented on HIVE-14378:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12821934/HIVE-14378.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/774/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/774/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-774/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
 [copy] Copying 15 files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
spark-client ---
[INFO] Compiling 5 source files to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/test-classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:copy (copy-guava-14) @ spark-client ---
[INFO] Configured Artifact: com.google.guava:guava:14.0.1:jar
[INFO] Copying guava-14.0.1.jar to 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/dependency/guava-14.0.1.jar
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ spark-client ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ spark-client ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
spark-client ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ spark-client ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/spark-client/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Query Language 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-exec ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-exec ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (generate-sources) @ hive-exec ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
Generating vector expression code
Generating vector expression test code
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java
 added.
[INFO] 
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory 
/data/hive-ptest/working/apache-github-source-source/ql/src/java
ANTLR Parser Generator  Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.

[jira] [Commented] (HIVE-14270) Write temporary data to HDFS when doing inserts on tables located on S3

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408615#comment-15408615
 ] 

Hive QA commented on HIVE-14270:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822110/HIVE-14270.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 72 failed/errored test(s), 10444 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_reordering_values
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_output_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_windowing_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_access_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_join_breaktask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_multi_insert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32_lessSize
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join35
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_map_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pcr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pointlookup2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pointlookup3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_vc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rand_partitionpruner2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union24
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union34
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_ptf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_windowing
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union_stats
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5

[jira] [Commented] (HIVE-14434) Vectorization: BytesBytes lookup capped count can be =0, =1, >=2

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408596#comment-15408596
 ] 

Sergey Shelukhin commented on HIVE-14434:
-

"2" value needs to be explained in a comment... otherwise +1

> Vectorization: BytesBytes lookup capped count can be =0, =1, >=2
> 
>
> Key: HIVE-14434
> URL: https://issues.apache.org/jira/browse/HIVE-14434
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14434.1.patch
>
>
> BytesBytesHashmap does not implement deep counters for the depth of a probe; 
> however, it does distinguish between 0, 1, and > 1 rows.
> This information can be used in the vectorized hash join to avoid copying the 
> probe-side keys.
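
For illustration, a minimal sketch of the idea in Java (hypothetical class and 
method names, not the actual Hive API):

{code}
// Hypothetical sketch (names assumed, not the actual Hive API): a lookup that
// only reports whether a key matched 0, 1, or >= 2 rows. ZERO lets the probe
// row be dropped, ONE means the probe-side key never needs to be copied into
// an overflow path, and only MANY requires the multi-value handling.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CappedCountLookup {
  public enum MatchCount { ZERO, ONE, MANY }

  private final Map<String, List<String>> map = new HashMap<>();

  public MatchCount probe(String key) {
    List<String> rows = map.get(key);
    if (rows == null || rows.isEmpty()) {
      return MatchCount.ZERO;
    }
    // The capped count deliberately does not distinguish 2 matches from 1000.
    return rows.size() == 1 ? MatchCount.ONE : MatchCount.MANY;
  }
}
{code}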



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14434) Vectorization: BytesBytes lookup capped count can be =0, =1, >=2

2016-08-04 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14434:
---
Attachment: HIVE-14434.1.patch

> Vectorization: BytesBytes lookup capped count can be =0, =1, >=2
> 
>
> Key: HIVE-14434
> URL: https://issues.apache.org/jira/browse/HIVE-14434
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14434.1.patch
>
>
> BytesBytesHashmap does not implement deep counters for the depth of a probe; 
> however, it does distinguish between 0, 1, and > 1 rows.
> This information can be used in the vectorized hash join to avoid copying the 
> probe-side keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14402) Vectorization: Fix Mapjoin overflow deserialization

2016-08-04 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14402:
---
Status: Open  (was: Patch Available)

> Vectorization: Fix Mapjoin overflow deserialization 
> 
>
> Key: HIVE-14402
> URL: https://issues.apache.org/jira/browse/HIVE-14402
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14402.1.patch, HIVE-14402.2.patch
>
>
> This is in a codepath that is currently disabled in master; however, enabling 
> it triggers an out-of-bounds (OOB) exception.
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setRef(BytesColumnVector.java:92)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeRowColumn(VectorDeserializeRow.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserialize(VectorDeserializeRow.java:674)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultLargeMultiValue(VectorMapJoinGenerateResultOperator.java:307)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:226)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultRepeatedAll(VectorMapJoinGenerateResultOperator.java:391)
> {code}
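
The index 1024 in the trace equals the default vectorized batch size, which 
suggests the deserializer stepped one row past the end of a full batch. A 
self-contained toy that reproduces the same failure mode (an illustration of 
the suspected cause, not the actual Hive code):

{code}
// Toy reproduction, not the actual Hive code: writing one row past the batch
// capacity throws the same ArrayIndexOutOfBoundsException: 1024.
public class BatchOverflowDemo {
  static final int BATCH_SIZE = 1024; // VectorizedRowBatch default size

  public static void main(String[] args) {
    byte[][] vector = new byte[BATCH_SIZE][];
    int row = 0;
    while (row <= BATCH_SIZE) {      // bug: should stop at row < BATCH_SIZE,
      vector[row++] = new byte[0];   // or flush the batch once it fills up
    }
  }
}
{code}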



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14402) Vectorization: Fix Mapjoin overflow deserialization

2016-08-04 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14402:
---
Attachment: HIVE-14402.2.patch

> Vectorization: Fix Mapjoin overflow deserialization 
> 
>
> Key: HIVE-14402
> URL: https://issues.apache.org/jira/browse/HIVE-14402
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14402.1.patch, HIVE-14402.2.patch
>
>
> This is in a codepath that is currently disabled in master; however, enabling 
> it triggers an out-of-bounds (OOB) exception.
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setRef(BytesColumnVector.java:92)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeRowColumn(VectorDeserializeRow.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserialize(VectorDeserializeRow.java:674)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultLargeMultiValue(VectorMapJoinGenerateResultOperator.java:307)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:226)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultRepeatedAll(VectorMapJoinGenerateResultOperator.java:391)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14402) Vectorization: Fix Mapjoin overflow deserialization

2016-08-04 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14402:
---
Status: Patch Available  (was: Open)

> Vectorization: Fix Mapjoin overflow deserialization 
> 
>
> Key: HIVE-14402
> URL: https://issues.apache.org/jira/browse/HIVE-14402
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-14402.1.patch, HIVE-14402.2.patch
>
>
> This is in a codepath that is currently disabled in master; however, enabling 
> it triggers an out-of-bounds (OOB) exception.
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setRef(BytesColumnVector.java:92)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeRowColumn(VectorDeserializeRow.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserialize(VectorDeserializeRow.java:674)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultLargeMultiValue(VectorMapJoinGenerateResultOperator.java:307)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:226)
> at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultRepeatedAll(VectorMapJoinGenerateResultOperator.java:391)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14035) Enable predicate pushdown to delta files created by ACID Transactions

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408584#comment-15408584
 ] 

Sergey Shelukhin commented on HIVE-14035:
-

Actually, one good thing to do would be to enable it by default in the next 
patch and see what the HiveQA run shows, with all the existing ACID tests 
running in the new mode.

> Enable predicate pushdown to delta files created by ACID Transactions
> -
>
> Key: HIVE-14035
> URL: https://issues.apache.org/jira/browse/HIVE-14035
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14035.02.patch, HIVE-14035.03.patch, 
> HIVE-14035.04.patch, HIVE-14035.05.patch, HIVE-14035.06.patch, 
> HIVE-14035.07.patch, HIVE-14035.08.patch, HIVE-14035.09.patch, 
> HIVE-14035.10.patch, HIVE-14035.11.patch, HIVE-14035.12.patch, 
> HIVE-14035.patch
>
>
> In the current Hive version, delta files created by ACID transactions do not 
> allow predicate pushdown if they contain any update/delete events. This is 
> done to preserve correctness when following a multi-version approach during 
> event collapsing, where an update event overwrites an existing insert event. 
> This JIRA proposes to split an update event into a combination of a delete 
> event followed by a new insert event, which can enable predicate pushdown to 
> all delta files without breaking correctness. To support backward 
> compatibility for this feature, this JIRA also proposes to add some sort of 
> versioning to ACID that can allow different versions of ACID transactions to 
> co-exist.
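
For illustration, a toy version of the proposed split (simplified event 
shapes, not Hive's actual ACID event encoding):

{code}
// Toy illustration (simplified shapes, not Hive's actual ACID encoding): one
// UPDATE becomes a DELETE of the old row version plus an INSERT of the new
// one, leaving only event types that are safe under predicate pushdown.
import java.util.Arrays;
import java.util.List;

public class UpdateSplitDemo {
  enum Op { INSERT, DELETE }

  static class Event {
    final Op op; final long rowId; final String value;
    Event(Op op, long rowId, String value) {
      this.op = op; this.rowId = rowId; this.value = value;
    }
  }

  static List<Event> splitUpdate(long rowId, String newValue) {
    return Arrays.asList(
        new Event(Op.DELETE, rowId, null),      // retire the old version
        new Event(Op.INSERT, rowId, newValue)); // write the new version
  }
}
{code}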



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14433) refactor LLAP plan cache avoidance and fix issue in merge processor

2016-08-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14433:

Summary: refactor LLAP plan cache avoidance and fix issue in merge 
processor  (was: refactor LLAP plan cache avoidance and fix issue in merge)

> refactor LLAP plan cache avoidance and fix issue in merge processor
> ---
>
> Key: HIVE-14433
> URL: https://issues.apache.org/jira/browse/HIVE-14433
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> Map and reduce processors do this:
> {noformat}
> if (LlapProxy.isDaemon()) {
>   cache = new org.apache.hadoop.hive.ql.exec.mr.ObjectCache(); // do not 
> cache plan
> ...
> {noformat}
> but the merge processor just gets the plan. If it runs in LLAP, it can get a 
> cached plan. We need to move this logic into ObjectCache itself, via an isPlan 
> arg or something. That will also fix this issue for the merge processor.
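
A rough sketch of the suggested refactor (class shape and import paths are 
assumptions; only LlapProxy.isDaemon() and the mr ObjectCache come from the 
snippet above):

{code}
// Hypothetical sketch of the suggested refactor: callers declare whether the
// cached object is a plan, and the LLAP-daemon check lives in one place
// instead of in each processor.
import org.apache.hadoop.hive.llap.io.api.LlapProxy;
import org.apache.hadoop.hive.ql.exec.ObjectCache;

public final class ObjectCacheFactorySketch {
  private ObjectCacheFactorySketch() {}

  public static ObjectCache getCache(boolean isPlan) {
    if (isPlan && LlapProxy.isDaemon()) {
      // Do not cache plans inside the LLAP daemon; fall back to the
      // per-query cache that the map/reduce processors already use.
      return new org.apache.hadoop.hive.ql.exec.mr.ObjectCache();
    }
    return new org.apache.hadoop.hive.ql.exec.tez.ObjectCache();
  }
}
{code}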



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14233) Improve vectorization for ACID by eliminating row-by-row stitching

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408541#comment-15408541
 ] 

Sergey Shelukhin commented on HIVE-14233:
-

Is it possible to post an RB? An update to an RB (or the initial post, if done 
via the rb tool, I assume) allows one to have a base patch (the HIVE-14035 
patch in this case, I assume)...

> Improve vectorization for ACID by eliminating row-by-row stitching
> --
>
> Key: HIVE-14233
> URL: https://issues.apache.org/jira/browse/HIVE-14233
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions, Vectorization
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14233.01.patch, HIVE-14233.02.patch, 
> HIVE-14233.03.patch, HIVE-14233.04.patch, HIVE-14233.05.patch
>
>
> This JIRA proposes to improve vectorization for ACID by eliminating 
> row-by-row stitching when reading back ACID files. In the current 
> implementation, a vectorized row batch is created by populating the batch one 
> row at a time, before the vectorized batch is passed up along the operator 
> pipeline. This row-by-row stitching limitation existed because 
> the ACID insert/update/delete events from various delta files needed to be 
> merged together before the actual version of a given row was determined. 
> HIVE-14035 has enabled us to break away from that limitation by splitting 
> ACID update events into a combination of delete+insert. In fact, it has now 
> enabled us to create splits on delta files.
> Building on top of HIVE-14035, this JIRA proposes to solve this earlier 
> bottleneck in the vectorized code path for ACID by now directly reading row 
> batches from the underlying ORC files and avoiding any stitching altogether. 
> Once a row batch is read from the split (which may be on a base/delta file), 
> the deleted rows will be found by cross-referencing them against a data 
> structure that will just keep track of deleted events (found in the 
> deleted_delta files). This will lead to a large performance gain when reading 
> ACID files in vectorized fashion, while enabling further optimizations in 
> the future that can be built on top of that.
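
A condensed sketch of the delete cross-referencing step described above 
(hypothetical types; the real patch operates on VectorizedRowBatch and ACID 
row ids):

{code}
// Condensed sketch with hypothetical types: after a whole batch is read,
// compact its selected indices by dropping rows found in the delete set.
import java.util.Set;

public class DeleteFilterSketch {
  /** Keeps only rows absent from the delete set; returns the new batch size. */
  static int filterDeleted(long[] rowIds, int[] selected, int size,
      Set<Long> deleted) {
    int newSize = 0;
    for (int i = 0; i < size; i++) {
      int row = selected[i];
      if (!deleted.contains(rowIds[row])) {
        selected[newSize++] = row; // keep: this row was never deleted
      }
    }
    return newSize;
  }
}
{code}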



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14035) Enable predicate pushdown to delta files created by ACID Transactions

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408536#comment-15408536
 ] 

Sergey Shelukhin commented on HIVE-14035:
-

Also, I think q file tests are needed...

> Enable predicate pushdown to delta files created by ACID Transactions
> -
>
> Key: HIVE-14035
> URL: https://issues.apache.org/jira/browse/HIVE-14035
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14035.02.patch, HIVE-14035.03.patch, 
> HIVE-14035.04.patch, HIVE-14035.05.patch, HIVE-14035.06.patch, 
> HIVE-14035.07.patch, HIVE-14035.08.patch, HIVE-14035.09.patch, 
> HIVE-14035.10.patch, HIVE-14035.11.patch, HIVE-14035.12.patch, 
> HIVE-14035.patch
>
>
> In the current Hive version, delta files created by ACID transactions do not 
> allow predicate pushdown if they contain any update/delete events. This is 
> done to preserve correctness when following a multi-version approach during 
> event collapsing, where an update event overwrites an existing insert event. 
> This JIRA proposes to split an update event into a combination of a delete 
> event followed by a new insert event, which can enable predicate pushdown to 
> all delta files without breaking correctness. To support backward 
> compatibility for this feature, this JIRA also proposes to add some sort of 
> versioning to ACID that can allow different versions of ACID transactions to 
> co-exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-14433) refactor LLAP plan cache avoidance and fix issue in merge

2016-08-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14433:

Comment: was deleted

(was: [~prasanth_j] fyi)

> refactor LLAP plan cache avoidance and fix issue in merge
> -
>
> Key: HIVE-14433
> URL: https://issues.apache.org/jira/browse/HIVE-14433
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> Map and reduce processors do this:
> {noformat}
> if (LlapProxy.isDaemon()) {
>   cache = new org.apache.hadoop.hive.ql.exec.mr.ObjectCache(); // do not 
> cache plan
> ...
> {noformat}
> but the merge processor just gets the plan. If it runs in LLAP, it can get a 
> cached plan. We need to move this logic into ObjectCache itself, via an isPlan 
> arg or something. That will also fix this issue for the merge processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14035) Enable predicate pushdown to delta files created by ACID Transactions

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408527#comment-15408527
 ] 

Sergey Shelukhin commented on HIVE-14035:
-

Some CR feedback on RB, mostly minor. Someone more familiar with ACID should 
review the core logic, ideally. It makes sense to me.

> Enable predicate pushdown to delta files created by ACID Transactions
> -
>
> Key: HIVE-14035
> URL: https://issues.apache.org/jira/browse/HIVE-14035
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14035.02.patch, HIVE-14035.03.patch, 
> HIVE-14035.04.patch, HIVE-14035.05.patch, HIVE-14035.06.patch, 
> HIVE-14035.07.patch, HIVE-14035.08.patch, HIVE-14035.09.patch, 
> HIVE-14035.10.patch, HIVE-14035.11.patch, HIVE-14035.12.patch, 
> HIVE-14035.patch
>
>
> In the current Hive version, delta files created by ACID transactions do not 
> allow predicate pushdown if they contain any update/delete events. This is 
> done to preserve correctness when following a multi-version approach during 
> event collapsing, where an update event overwrites an existing insert event. 
> This JIRA proposes to split an update event into a combination of a delete 
> event followed by a new insert event, which can enable predicate pushdown to 
> all delta files without breaking correctness. To support backward 
> compatibility for this feature, this JIRA also proposes to add some sort of 
> versioning to ACID that can allow different versions of ACID transactions to 
> co-exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14415) Upgrade qtest execution framework to junit4 - TestPerfCliDriver

2016-08-04 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14415:

Attachment: HIVE-14415.2.patch

> Upgrade qtest execution framework to junit4 - TestPerfCliDriver
> ---
>
> Key: HIVE-14415
> URL: https://issues.apache.org/jira/browse/HIVE-14415
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14415.1.patch, HIVE-14415.2.patch
>
>
> I would like to upgrade the current maven+ant+velocimacro+junit4 qtest 
> generation framework to use only junit4 - while (trying) to keep 
> all the existing features it provides.
> What I can't really do with the current one: easily execute a single qtest 
> from an IDE (as a matter of fact I can...but it's way too complicated; after 
> this it won't be a cake-walk either...but it will be a step closer ;)
> I think this change will make it clearer how these tests are configured 
> and executed.
> I will do this in two phases; currently I will only change 
> {{TestPerfCliDriver}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14433) refactor LLAP plan cache avoidance and fix issue in merge

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408493#comment-15408493
 ] 

Sergey Shelukhin commented on HIVE-14433:
-

[~prasanth_j] fyi

> refactor LLAP plan cache avoidance and fix issue in merge
> -
>
> Key: HIVE-14433
> URL: https://issues.apache.org/jira/browse/HIVE-14433
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> Map and reduce processors do this:
> {noformat}
> if (LlapProxy.isDaemon()) {
>   cache = new org.apache.hadoop.hive.ql.exec.mr.ObjectCache(); // do not 
> cache plan
> ...
> {noformat}
> but the merge processor just gets the plan. If it runs in LLAP, it can get a 
> cached plan. We need to move this logic into ObjectCache itself, via an isPlan 
> arg or something. That will also fix this issue for the merge processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14433) refactor LLAP plan cache avoidance and fix issue in merge

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408494#comment-15408494
 ] 

Sergey Shelukhin commented on HIVE-14433:
-

[~prasanth_j] fyi

> refactor LLAP plan cache avoidance and fix issue in merge
> -
>
> Key: HIVE-14433
> URL: https://issues.apache.org/jira/browse/HIVE-14433
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> Map and reduce processors do this:
> {noformat}
> if (LlapProxy.isDaemon()) {
>   cache = new org.apache.hadoop.hive.ql.exec.mr.ObjectCache(); // do not 
> cache plan
> ...
> {noformat}
> but the merge processor just gets the plan. If it runs in LLAP, it can get a 
> cached plan. We need to move this logic into ObjectCache itself, via an isPlan 
> arg or something. That will also fix this issue for the merge processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14204) Optimize loading dynamic partitions

2016-08-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14204:

Component/s: Metastore

> Optimize loading dynamic partitions 
> 
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch, 
> HIVE-14204.4.patch, HIVE-14204.6.patch, HIVE-14204.7.patch, HIVE-14204.8.patch
>
>
> Lots of time is spent, in a sequential fashion, loading a dynamically 
> partitioned dataset on the driver side. E.g., a simple dynamic partitioned 
> load such as the following takes 300+ seconds:
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}
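
A minimal sketch of the parallelization idea (thread count and task shape are 
assumptions, not the committed implementation):

{code}
// Minimal sketch (thread count and task shape are assumptions): load each
// partition's metadata on a pool instead of sequentially on the driver.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelPartitionLoad {
  static void loadAll(List<String> partitionPaths) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(16);
    try {
      List<Callable<Void>> tasks = partitionPaths.stream()
          .map(p -> (Callable<Void>) () -> { loadPartition(p); return null; })
          .collect(Collectors.toList());
      pool.invokeAll(tasks); // blocks until every partition is loaded
    } finally {
      pool.shutdown();
    }
  }

  static void loadPartition(String path) {
    // placeholder for the per-partition metastore/filesystem work
  }
}
{code}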



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14204) Optimize loading dynamic partitions

2016-08-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14204:

   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Rajesh!

> Optimize loading dynamic partitions 
> 
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch, 
> HIVE-14204.4.patch, HIVE-14204.6.patch, HIVE-14204.7.patch, HIVE-14204.8.patch
>
>
> Lots of time is spent, in a sequential fashion, loading a dynamically 
> partitioned dataset on the driver side. E.g., a simple dynamic partitioned 
> load such as the following takes 300+ seconds:
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14342:
-
Status: Patch Available  (was: Open)

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases that the output is as follows
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal introduced in HIVE-6758 to be able 
> to run beeline as a background process. But this is the unfortunate side 
> effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14342) Beeline output is garbled when executed from a remote shell

2016-08-04 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408468#comment-15408468
 ] 

Naveen Gangam commented on HIVE-14342:
--

After quite a bit of testing, I found that the most reliable means to detect 
whether beeline is being run via a remote shell is to check if the stdin for 
the process is a pipe.
So I have added an additional check to the condition before using jline's 
UnsupportedTerminal. The use of UnsupportedTerminal should be limited to 
beeline processes running locally as a background process.
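
For illustration, one coarse way to detect a non-interactive stdin from plain 
Java (a heuristic sketch; the actual patch may use a different check):

{code}
// Heuristic sketch, not necessarily the actual patch: when the process is
// attached to a pipe or has its streams redirected rather than an
// interactive terminal, System.console() returns null.
public class TerminalCheckSketch {
  static boolean isNonInteractive() {
    return System.console() == null;
  }
}
{code}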

The output is now as we expected prior to this change:
{code}
ssh -l root  "sudo -u hive beeline -u 
jdbc:hive2://localhost:1 -n hive -p hive --silent=true --outputformat=csv2 
-f /tmp/run.sql"
root@'s password: 
16/08/04 13:46:35 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
module jar containing PrefixTreeCodec is not present.  Continuing without it.





key,name,value
1,TRUE,1


$
{code}

> Beeline output is garbled when executed from a remote shell
> ---
>
> Key: HIVE-14342
> URL: https://issues.apache.org/jira/browse/HIVE-14342
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values 
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> then run a select query
> {code} 
> # cat /tmp/select.sql 
> set hive.execution.engine=mr;
> select key,name,value 
> from clitest 
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/select.sql" 
> root@'s password: 
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> nullkey,name,value 
> 1,TRUE,1
> null   
> $
> {code}
> In older releases that the output is as follows
> {code}
> $ ssh -l root  "sudo -u hive beeline -u 
> jdbc:hive2://localhost:1 -n hive -p hive --silent=true 
> --outputformat=csv2 -f /tmp/run.sql" 
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password: 
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree 
> module jar containing PrefixTreeCodec is not present.  Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of 
> -Djline.terminal=jline.UnsupportedTerminal introduced in HIVE-6758 to be able 
> to run beeline as a background process. But this is the unfortunate side 
> effect of that fix.
> Running beeline in background also produces garbled output.
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true 
> --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > 
> /tmp/beeline.txt &
> # cat /tmp/beeline.txt 
> null1,TRUE,1   
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented but not 
> used automatically by beeline under the covers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-08-04 Thread Dmitry Tolpeko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408460#comment-15408460
 ] 

Dmitry Tolpeko commented on HIVE-14382:
---

Patch committed. 

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>  Components: hpl/sql
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL standards, the reverse FOR statement should be like this:
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in Hive it is like this:
> FOR index IN Optional[Reverse] Upper_Bound Lower_Bound
> so I am just trying to improve the functionality of the reverse FOR statement.
> REFERENCES: 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-08-04 Thread Dmitry Tolpeko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tolpeko updated HIVE-14382:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>  Components: hpl/sql
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL standards, the reverse FOR statement should be like this:
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in Hive it is like this:
> FOR index IN Optional[Reverse] Upper_Bound Lower_Bound
> so I am just trying to improve the functionality of the reverse FOR statement.
> REFERENCES: 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14421) FS.deleteOnExit holds references to _tmp_space.db files

2016-08-04 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408459#comment-15408459
 ] 

Gunther Hagleitner commented on HIVE-14421:
---

+1 Test run looks ok to me.

> FS.deleteOnExit holds references to _tmp_space.db files
> ---
>
> Key: HIVE-14421
> URL: https://issues.apache.org/jira/browse/HIVE-14421
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14421.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408451#comment-15408451
 ] 

Hive QA commented on HIVE-14411:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822041/HIVE-14411.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10425 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-vectorization_16.q-schema_evol_text_vec_mapwork_part_all_complex.q-vector_acid3.q-and-12-more
 - did not produce a TEST-*.xml file
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/772/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/772/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-772/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822041 - PreCommit-HIVE-MASTER-Build

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table, 
> for example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will return a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-08-04 Thread Dmitry Tolpeko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408440#comment-15408440
 ] 

Dmitry Tolpeko commented on HIVE-14382:
---

Reviewed and updated docs.

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>  Components: hpl/sql
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL standards, the reverse FOR statement should be like this:
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in Hive it is like this:
> FOR index IN Optional[Reverse] Upper_Bound Lower_Bound
> so I am just trying to improve the functionality of the reverse FOR statement.
> REFERENCES: 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-08-04 Thread Dmitry Tolpeko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tolpeko updated HIVE-14382:
--
Component/s: hpl/sql

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>  Components: hpl/sql
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL standards, the reverse FOR statement should be like this:
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in Hive it is like this:
> FOR index IN Optional[Reverse] Upper_Bound Lower_Bound
> so I am just trying to improve the functionality of the reverse FOR statement.
> REFERENCES: 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14129) Execute move tasks in parallel

2016-08-04 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408426#comment-15408426
 ] 

Eugene Koifman commented on HIVE-14129:
---

[~thejas], which patch should I look at? Here we just toggle one line.

> Execute move tasks in parallel
> --
>
> Key: HIVE-14129
> URL: https://issues.apache.org/jira/browse/HIVE-14129
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14129.2.patch, HIVE-14129.patch, HIVE-14129.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14432) LLAP signing unit test may be timing-dependent

2016-08-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14432:

Component/s: Test

> LLAP signing unit test may be timing-dependent
> --
>
> Key: HIVE-14432
> URL: https://issues.apache.org/jira/browse/HIVE-14432
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14432.patch
>
>
> Seems like it's possible for a slow background thread to roll the key after we 
> have signed with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14432) LLAP signing unit test may be timing-dependent

2016-08-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14432:

Attachment: HIVE-14432.patch

[~prasanth_j] [~jdere] can you take a look?

> LLAP signing unit test may be timing-dependent
> --
>
> Key: HIVE-14432
> URL: https://issues.apache.org/jira/browse/HIVE-14432
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14432.patch
>
>
> Seems like it's possible for a slow background thread to roll the key after we 
> have signed with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14431) Recognize COALESCE as CASE and extend CASE simplification to cover more cases

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14431:
---
Attachment: HIVE-14431.patch

> Recognize COALESCE as CASE and extend CASE simplification to cover more cases
> -
>
> Key: HIVE-14431
> URL: https://issues.apache.org/jira/browse/HIVE-14431
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14431.patch
>
>
> Transform:
> {code}
> (COALESCE(a, '')  = '') OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> into:
> {code}
> (a='') OR
>(a is null) OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> With complex queries, this will lead us to factor out more predicates that 
> could be pushed to the TS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14431) Recognize COALESCE as CASE and extend CASE simplification to cover more cases

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14431:
---
Status: Patch Available  (was: In Progress)

> Recognize COALESCE as CASE and extend CASE simplification to cover more cases
> -
>
> Key: HIVE-14431
> URL: https://issues.apache.org/jira/browse/HIVE-14431
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>
> Transform:
> {code}
> (COALESCE(a, '')  = '') OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> into:
> {code}
> (a='') OR
>(a is null) OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> With complex queries, this will lead us to factor out more predicates that 
> could be pushed to the TS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14431) Recognize COALESCE as CASE and extend CASE simplification to cover more cases

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14431 started by Jesus Camacho Rodriguez.
--
> Recognize COALESCE as CASE and extend CASE simplification to cover more cases
> -
>
> Key: HIVE-14431
> URL: https://issues.apache.org/jira/browse/HIVE-14431
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>
> Transform:
> {code}
> (COALESCE(a, '')  = '') OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> into:
> {code}
> (a='') OR
>(a is null) OR
>(a = 'A' AND b = c)  OR
>(a = 'B' AND b = d) OR
>(a = 'C' AND b = e) OR
>(a = 'D' AND b = f) OR
>(a = 'E' AND b = g) OR
>(a = 'F' AND b = h)
> {code}
> With complex queries, this will lead us to factor out more predicates that 
> could be pushed to the TS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14204) Optimize loading dynamic partitions

2016-08-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408303#comment-15408303
 ] 

Ashutosh Chauhan commented on HIVE-14204:
-

It seems the method getPartitionsForPath() is not used anywhere; we can get rid 
of it. Other than that, +1.

> Optimize loading dynamic partitions 
> 
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch, 
> HIVE-14204.4.patch, HIVE-14204.6.patch, HIVE-14204.7.patch, HIVE-14204.8.patch
>
>
> Lots of time is spent, in a sequential fashion, loading a dynamically 
> partitioned dataset on the driver side. E.g., a simple dynamic partitioned 
> load such as the following takes 300+ seconds:
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408281#comment-15408281
 ] 

Chris Nauroth commented on HIVE-14423:
--

Hello [~rajesh.balamohan].  Thank you for the patch.  This looks really 
valuable for S3A, WASB and other file systems backed by blob stores, but I have 
a question about whether it will change load patterns and performance 
characteristics when running on HDFS.

For HDFS, {{getContentSummary}} is a single RPC to the NameNode.  It's possibly 
the most expensive NameNode RPC, at least among the read APIs, because the 
NameNode needs to hold a lock while traversing the entire inode sub-tree.  
However, it does have the benefit of getting all of the calculation done for a 
single path/partition in a single network call, so overall, this Hive algorithm 
is O(N) where N = # partitions.

With this patch, it starts using {{FileSystem#listFiles}} with the recursive 
option, which turns into multiple {{getListing}} NameNode RPCs, one for each 
sub-directory.  The {{getListing}} RPC is less expensive for the NameNode to 
execute compared to {{getContentSummary}}, but overall this algorithm requires 
many more network round-trips: O(N * M) where N = # partitions and M = average 
# directories per partition.

At this point in the Hive code, is it possible that the partitions refer to 
directories in the file system that are multiple levels deep with nested 
sub-directories?  I suspect the answer is yes, because the existing code used 
{{getContentSummary}}, and your patch used the recursive option for 
{{listFiles}}.

Do you think an alternative approach would be to override {{getContentSummary}} 
in {{S3AFileSystem}} and optimize it?  That might look similar to other 
optimizations that are making use of S3 bulk listings, such as HADOOP-13208 and 
HADOOP-13371.

Parallelizing the calls for all partitions looks valuable regardless of which 
approach we take.

Cc [~ste...@apache.org] FYI for when he returns.
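
For reference, a simplified sketch of the two call patterns being compared, 
against the standard Hadoop FileSystem API (error handling elided; not Hive's 
actual StatsUtils code):

{code}
// Simplified sketch of the two call patterns (not Hive's actual code).
import java.io.IOException;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class PartitionSizeSketch {
  // One NameNode RPC per partition, but a relatively expensive one.
  static long viaContentSummary(FileSystem fs, Path partition)
      throws IOException {
    ContentSummary cs = fs.getContentSummary(partition);
    return cs.getLength();
  }

  // Cheaper individual RPCs, but one getListing per sub-directory.
  static long viaRecursiveListing(FileSystem fs, Path partition)
      throws IOException {
    long total = 0;
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(partition, true);
    while (it.hasNext()) {
      total += it.next().getLen();
    }
    return total;
  }
}
{code}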

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in metastore, it tries to get the file 
> sizes from FS.
> e.g
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g., TPC-DS store_sales, which has 1000s of 
> partitions); the query can spend 1000s of seconds just waiting for this 
> information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14430) More instances of HiveConf and the associated UDFClassLoader than expected

2016-08-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408247#comment-15408247
 ] 

Sergey Shelukhin commented on HIVE-14430:
-

How many HS2 threadpool threads are there? All the metastore stuff is 
thread-local.

> More instances of HiveConf and the associated UDFClassLoader than expected
> --
>
> Key: HIVE-14430
> URL: https://issues.apache.org/jira/browse/HIVE-14430
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Siddharth Seth
>Priority: Critical
>
> 841 instances of HiveConf.
> 831 instances of UDFClassLoader
> This is on a HS2 instance configured to run 10 concurrent queries with LLAP.
> 10 SessionState instances. Something is holding on to the additional 
> HiveConf, UDFClassLoaders - potentially HMSHandler.
> This is with an embedded metastore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-14429) HadoopMetrics2Reporter leaks memory if the metrics sink is not configured correctly

2016-08-04 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HIVE-14429.
---
Resolution: Duplicate

HIVE-14228

> HadoopMetrics2Reporter leaks memory if the metrics sink is not configured 
> correctly
> ---
>
> Key: HIVE-14429
> URL: https://issues.apache.org/jira/browse/HIVE-14429
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Siddharth Seth
>Priority: Critical
>
> About 80MB held after 7 hours of running. Metrics2Collector aggregates only 
> when it's invoked by the Hadoop sink.
> Options (the first one is better, IMO):
> 1. Fix Metrics2Collector to aggregate more often, and fix the dependency in 
> Hive accordingly.
> 2. Don't set up the metrics sub-system if a sink is not configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14429) HadoopMetrics2Reporter leaks memory if the metrics sink is not configured correctly

2016-08-04 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14429:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HIVE-13176)

> HadoopMetrics2Reporter leaks memory if the metrics sink is not configured 
> correctly
> ---
>
> Key: HIVE-14429
> URL: https://issues.apache.org/jira/browse/HIVE-14429
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Siddharth Seth
>Priority: Critical
>
> About 80MB held after 7 hours of running. Metrics2Collector aggregates only 
> when it's invoked by the Hadoop sink.
> Options (the first one is better, IMO):
> 1. Fix Metrics2Collector to aggregate more often, and fix the dependency in 
> Hive accordingly.
> 2. Don't set up the metrics sub-system if a sink is not configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14413) Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and extract more deterministic pieces out

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14413:
---
Attachment: HIVE-14413.02.patch

> Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and 
> extract more deterministic pieces out
> 
>
> Key: HIVE-14413
> URL: https://issues.apache.org/jira/browse/HIVE-14413
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14413.01.patch, HIVE-14413.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14413) Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and extract more deterministic pieces out

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14413:
---
Status: Patch Available  (was: In Progress)

> Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and 
> extract more deterministic pieces out
> 
>
> Key: HIVE-14413
> URL: https://issues.apache.org/jira/browse/HIVE-14413
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14413.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14413) Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and extract more deterministic pieces out

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14413:
---
Status: Open  (was: Patch Available)

> Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and 
> extract more deterministic pieces out
> 
>
> Key: HIVE-14413
> URL: https://issues.apache.org/jira/browse/HIVE-14413
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14413.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14413) Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and extract more deterministic pieces out

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14413 started by Jesus Camacho Rodriguez.
--
> Extend HivePreFilteringRule to traverse inside elements of DNF/CNF and 
> extract more deterministic pieces out
> 
>
> Key: HIVE-14413
> URL: https://issues.apache.org/jira/browse/HIVE-14413
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14413.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14129) Execute move tasks in parallel

2016-08-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408166#comment-15408166
 ] 

Thejas M Nair commented on HIVE-14129:
--

[~ashutoshc] It doesn't look like the 2nd issue is fixed, i.e.:
bq. 2. SessionState is examined by ACID code in MoveTask; if it is in a 
different thread, that thread will not have the SessionState object available.

TaskRunner would need an additional method, setSessionState, similar to 
setOperationLog, that sets the thread-local SessionState.
Also, we should check if the methods being used in MoveTask on SessionState for 
ACID purposes are thread-safe. Looking at the method names, I would expect them 
to be thread-safe. cc [~ekoifman]

Without that we might have issues with ACID and execparallel=true.
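
A sketch of what such a setSessionState hook could look like (assumed shape, 
mirroring the setOperationLog suggestion; SessionState.setCurrentSessionState 
is the existing thread-local setter):

{code}
// Sketch of the suggested addition (shape assumed): the runner thread
// attaches the parent's SessionState to its own thread-local before
// executing the task, so ACID checks in MoveTask can see it.
import org.apache.hadoop.hive.ql.session.SessionState;

public class TaskRunnerSketch extends Thread {
  private final SessionState parentSessionState;

  public TaskRunnerSketch(SessionState parentSessionState) {
    this.parentSessionState = parentSessionState;
  }

  @Override
  public void run() {
    // Publish the parent's SessionState via the thread-local getter that
    // MoveTask reads.
    SessionState.setCurrentSessionState(parentSessionState);
    // ... execute the task ...
  }
}
{code}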


> Execute move tasks in parallel
> --
>
> Key: HIVE-14129
> URL: https://issues.apache.org/jira/browse/HIVE-14129
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14129.2.patch, HIVE-14129.patch, HIVE-14129.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14424) CLIRestoreTest failing in branch 2.1

2016-08-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408146#comment-15408146
 ] 

Pengcheng Xiong commented on HIVE-14424:


[~prongs], thanks for your attention. Our intention is to make tests exercise 
the security code path to prevent bugs. Your change looks good. +1. Could you 
please submit it for a QA run? Thanks.

> CLIRestoreTest failing in branch 2.1
> 
>
> Key: HIVE-14424
> URL: https://issues.apache.org/jira/browse/HIVE-14424
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-14424.1.patch, HIVE-14424.patch
>
>
> {noformat}
> java.lang.RuntimeException: Error applying authorization policy on hive 
> configuration: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:113)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.getService(CLIServiceRestoreTest.java:48)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.(CLIServiceRestoreTest.java:28)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:195)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:244)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:241)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:836)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1602)
>   at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:126)
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:110)
>   ... 22 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:385)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:812)
>   ... 25 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:375)
>   ... 26 more
> {noformat}
> But it is caused by HIVE-14221. Code changes are here: 
> https://github.com/apache/hive/commit/de5ae86ee70d9396d5cefc499507b5f31fecc916

[jira] [Commented] (HIVE-14421) FS.deleteOnExit holds references to _tmp_space.db files

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15408111#comment-15408111
 ] 

Hive QA commented on HIVE-14421:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12821966/HIVE-14421.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10425 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-insert_values_non_partitioned.q-update_after_multiple_inserts.q-tez_union_dynamic_partition.q-and-12-more
 - did not produce a TEST-*.xml file
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityMultiplePreemptionsSameHost2
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/763/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/763/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-763/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12821966 - PreCommit-HIVE-MASTER-Build

> FS.deleteOnExit holds references to _tmp_space.db files
> ---
>
> Key: HIVE-14421
> URL: https://issues.apache.org/jira/browse/HIVE-14421
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14421.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14270) Write temporary data to HDFS when doing inserts on tables located on S3

2016-08-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-14270:
---
Attachment: HIVE-14270.4.patch

Attaching a new patch to run another set of tests.

[~ashutoshc] I removed the duplication of the rename(), and created HDFS scratch 
directories instead. It is simpler than the other code, and less prone to 
errors. Could you help me review it?
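
The underlying idea, roughly sketched below with illustrative names (the 
staging-dir layout and the use of hive.exec.scratchdir here are assumptions for 
illustration, not the actual patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StagingDirChooser {
  // Hypothetical: stage intermediate data on the default FS (HDFS) when the
  // target table lives on a different FS such as S3.
  public static Path chooseStagingDir(Configuration conf, Path target) throws Exception {
    FileSystem defaultFs = FileSystem.get(conf);
    String targetScheme = target.toUri().getScheme();
    boolean sameFs = targetScheme == null
        || targetScheme.equals(defaultFs.getUri().getScheme());
    if (sameFs) {
      // same filesystem: keep the staging dir next to the target as before
      return new Path(target, ".hive-staging");
    }
    // different filesystem (e.g. s3a): fall back to the HDFS scratch dir
    return new Path(conf.get("hive.exec.scratchdir", "/tmp/hive"), "staging");
  }
}
{code}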

> Write temporary data to HDFS when doing inserts on tables located on S3
> ---
>
> Key: HIVE-14270
> URL: https://issues.apache.org/jira/browse/HIVE-14270
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-14270.1.patch, HIVE-14270.2.patch, 
> HIVE-14270.3.patch, HIVE-14270.4.patch
>
>
> Currently, when doing INSERT statements on tables located on S3, Hive writes 
> and reads temporary (or intermediate) files to S3 as well. 
> If HDFS is still the default filesystem on Hive, then we can keep such 
> temporary files on HDFS to keep things running faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14029) Update Spark version to 2.0.0

2016-08-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407930#comment-15407930
 ] 

Sergio Peña commented on HIVE-14029:


Let's wait until HIVE-14240 is resolved. The current spark assembly used for 
all itests uses Spark 1.6, so this patch won't work. Also, I've heard that Spark 
2.0 won't ship a spark assembly anymore, so we need to depend on Spark Maven 
dependencies to run the tests.

> Update Spark version to 2.0.0
> -
>
> Key: HIVE-14029
> URL: https://issues.apache.org/jira/browse/HIVE-14029
> Project: Hive
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
>
> There are quite a few new optimizations in Spark 2.0.0. We need to bump up 
> Spark to 2.0.0 to benefit from those performance improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14204) Optimize loading dynamic partitions

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407722#comment-15407722
 ] 

Hive QA commented on HIVE-14204:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12821961/HIVE-14204.8.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10440 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityMultiplePreemptionsSameHost2
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/762/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/762/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-762/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12821961 - PreCommit-HIVE-MASTER-Build

> Optimize loading dynamic partitions 
> 
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch, 
> HIVE-14204.4.patch, HIVE-14204.6.patch, HIVE-14204.7.patch, HIVE-14204.8.patch
>
>
> A lot of time is spent, in a sequential fashion, loading dynamically 
> partitioned datasets on the driver side. E.g., a simple dynamic partition load 
> such as the following takes 300+ seconds:
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}
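
For illustration, one way the driver could parallelize this, sketched with 
assumed names (partitionToPath, loadPartition, and the pool size are 
illustrative, not the actual patch):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.fs.Path;

public class ParallelPartitionLoad {
  // Hypothetical sketch: fire the per-partition load work concurrently
  // instead of loading one partition at a time on the driver.
  static void loadAll(Map<String, Path> partitionToPath) {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    List<CompletableFuture<Void>> loads = new ArrayList<>();
    for (Map.Entry<String, Path> e : partitionToPath.entrySet()) {
      loads.add(CompletableFuture.runAsync(
          () -> loadPartition(e.getKey(), e.getValue()), pool));
    }
    // wait for every partition load to finish before declaring success
    CompletableFuture.allOf(loads.toArray(new CompletableFuture[0])).join();
    pool.shutdown();
  }

  // placeholder for the existing single-partition load logic
  static void loadPartition(String partSpec, Path location) { /* ... */ }
}
{code}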



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14404) Allow delimiterfordsv to use multiple-character delimiters

2016-08-04 Thread Stephen Measmer (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407680#comment-15407680
 ] 

Stephen Measmer commented on HIVE-14404:


My preference would be #2, as Super CSV does not look like it's an Apache 
project. Making the change in the context of Hive would contribute to 
supportability.
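
To make the requested behavior concrete, here is a small self-contained 
illustration of joining columns with the full delimiter string rather than only 
its first character, using the values from the example in the description below 
(plain Java for demonstration; not Beeline's actual code path):

{code}
public class MultiCharDelimiter {
  public static void main(String[] args) {
    // Illustration only: join row columns with the whole multi-character
    // delimiter instead of truncating it to its first character.
    String delimiterForDsv = "^-^";
    String[] columns = {"111201081253106275", "31-Oct-2011 00:00:00", "Text"};
    System.out.println(String.join(delimiterForDsv, columns));
    // prints: 111201081253106275^-^31-Oct-2011 00:00:00^-^Text
  }
}
{code}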

> Allow delimiterfordsv to use multiple-character delimiters
> --
>
> Key: HIVE-14404
> URL: https://issues.apache.org/jira/browse/HIVE-14404
> Project: Hive
>  Issue Type: Improvement
>Reporter: Stephen Measmer
>Assignee: Marta Kuczora
>
> HIVE-5871 allows for reading multiple-character delimiters.  Would like the 
> ability to use outputformat=dsv and define multiple-character delimiters.  
> Today delimiterfordsv only uses one character even if multiple are passed.
> For example:
> when I use:
> beeline>!set outputformat dsv
> beeline>!set delimiterfordsv "^-^"
>  I get:
> 111201081253106275^31-Oct-2011 
> 00:00:00^Text^201605232823^2016051968232151^201605232823_2016051968232151_0_0_1
>  
> Would like it to be:
> 111201081253106275^-^31-Oct-2011 
> 00:00:00^-^Text^-^201605232823^-^2016051968232151^-^201605232823_2016051968232151_0_0_1
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14128) Parallelize jobClose phases

2016-08-04 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407617#comment-15407617
 ] 

Rajesh Balamohan commented on HIVE-14128:
-

Will revise the patch and upload it. A couple of tests failed locally.

> Parallelize jobClose phases
> ---
>
> Key: HIVE-14128
> URL: https://issues.apache.org/jira/browse/HIVE-14128
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0, 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Rajesh Balamohan
> Attachments: HIVE-14128.1.patch, HIVE-14128.master.2.patch, 
> HIVE-14128.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14304) Beeline command will fail when entireLineAsCommand set to true

2016-08-04 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao reassigned HIVE-14304:
---

Assignee: Hari Sankar Sivarama Subramaniyan  (was: Niklaus Xiao)

> Beeline command will fail when entireLineAsCommand set to true
> --
>
> Key: HIVE-14304
> URL: https://issues.apache.org/jira/browse/HIVE-14304
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.3.0, 2.2.0
>Reporter: Niklaus Xiao
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.2.0
>
> Attachments: HIVE-14304.1.patch
>
>
> Use beeline
> {code}
> beeline --entireLineAsCommand=true
> {code}
> show tables fails:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> show tables;
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> extraneous input ';' expecting EOF near '' (state=42000,code=4)
> {code}
> We should remove the trailing semi-colon.
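
A minimal sketch of that fix, assuming the whole line arrives as a single string 
when entireLineAsCommand is set (illustrative, not necessarily the attached patch):

{code}
public class TrailingSemicolon {
  // strip trailing semicolons before sending the line as one command
  static String stripTrailingSemicolon(String line) {
    String command = line.trim();
    while (command.endsWith(";")) {
      command = command.substring(0, command.length() - 1).trim();
    }
    return command;
  }

  public static void main(String[] args) {
    System.out.println(stripTrailingSemicolon("show tables;")); // show tables
  }
}
{code}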



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao reassigned HIVE-14411:
---

Assignee: Ashutosh Chauhan  (was: Niklaus Xiao)

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table 
> example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407591#comment-15407591
 ] 

Niklaus Xiao commented on HIVE-14411:
-

We should not apply NullScanOptimizer to non-native tables.

cc [~ashutoshc] for code review.
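
A sketch of the guard this implies, assuming Table.isNonNative() is the flag 
for storage-handler-backed tables (an illustration, not necessarily the patch 
as committed):

{code}
import org.apache.hadoop.hive.ql.metadata.Table;

public class NullScanGuard {
  // Hypothetical guard for the null-scan rewrite: a non-native table (e.g. one
  // backed by the HBase storage handler) has no HDFS files that could be
  // swapped for an empty dummy path, so the optimizer should skip it.
  static boolean eligibleForNullScan(Table table) {
    return !table.isNonNative();
  }
}
{code}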

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Niklaus Xiao
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table 
> example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao updated HIVE-14411:

Attachment: HIVE-14411.1.patch

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Niklaus Xiao
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table 
> example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao updated HIVE-14411:

Target Version/s: 2.2.0
  Status: Patch Available  (was: Open)

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Niklaus Xiao
> Attachments: HIVE-14411.1.patch
>
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table 
> example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14411) selecting Hive on Hbase table may cause FileNotFoundException

2016-08-04 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao reassigned HIVE-14411:
---

Assignee: Niklaus Xiao

> selecting Hive on Hbase table may cause FileNotFoundException
> -
>
> Key: HIVE-14411
> URL: https://issues.apache.org/jira/browse/HIVE-14411
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.3.0
>Reporter: Rudd Chen
>Assignee: Niklaus Xiao
>
> 1. create an HBase table hbase_table
> 2. create an external Hive table test_table mapping to the HBase table 
> example: 
> create 'hbase_t' 
> ,{NAME=>'cf',COMPRESSION=>'snappy'},{NUMREGIONS=>15,SPLITALGO=>'HexStringSplit'}
> create external table hbase_t_hive(key1 string,cf_train string,cf_flight 
> string,cf_wbsw string,cf_wbxw string,cf_bgrz string,cf_bgtf string) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with 
> serdeproperties("hbase.columns.mapping"=":key,cf:train,cf:flight,cf:wbsw,cf:wbxw,cf:bgrz,cf:bgtf")
>  tblproperties("hbase.table.name"="hbase_t");
> create table test3 as select * from hbase_t_hive where 1=2;
> 
> if hive.optimize.null.scan=true, it will throw a FileNotFoundException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14424) CLIRestoreTest failing in branch 2.1

2016-08-04 Thread Rajat Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajat Khandelwal updated HIVE-14424:

Attachment: HIVE-14424.1.patch

> CLIRestoreTest failing in branch 2.1
> 
>
> Key: HIVE-14424
> URL: https://issues.apache.org/jira/browse/HIVE-14424
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-14424.1.patch, HIVE-14424.patch
>
>
> {noformat}
> java.lang.RuntimeException: Error applying authorization policy on hive 
> configuration: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:113)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.getService(CLIServiceRestoreTest.java:48)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.(CLIServiceRestoreTest.java:28)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:195)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:244)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:241)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:836)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1602)
>   at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:126)
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:110)
>   ... 22 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:385)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:812)
>   ... 25 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:375)
>   ... 26 more
> {noformat}
> But it is caused by HIVE-14221. Code changes are here: 
> https://github.com/apache/hive/commit/de5ae86ee70d9396d5cefc499507b5f31fecc916
> So the issue is that, in this patch, the class 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
>  has been changed everywhere it is mentioned, except in one place. 

[jira] [Updated] (HIVE-14424) CLIRestoreTest failing in branch 2.1

2016-08-04 Thread Rajat Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajat Khandelwal updated HIVE-14424:

Attachment: (was: HIVE-14424.1.patch)

> CLIRestoreTest failing in branch 2.1
> 
>
> Key: HIVE-14424
> URL: https://issues.apache.org/jira/browse/HIVE-14424
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-14424.1.patch, HIVE-14424.patch
>
>
> {noformat}
> java.lang.RuntimeException: Error applying authorization policy on hive 
> configuration: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:113)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.getService(CLIServiceRestoreTest.java:48)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.(CLIServiceRestoreTest.java:28)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:195)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:244)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:241)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:836)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1602)
>   at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:126)
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:110)
>   ... 22 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:385)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:812)
>   ... 25 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:375)
>   ... 26 more
> {noformat}
> But it is caused by HIVE-14221. Code changes are here: 
> https://github.com/apache/hive/commit/de5ae86ee70d9396d5cefc499507b5f31fecc916
> So the issue is that, in this patch, the class 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
>  has been changed everywhere it is mentioned, except in one place. 

[jira] [Updated] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14423:

Fix Version/s: 2.2.0
   Status: Patch Available  (was: Open)

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in the metastore, Hive tries to get 
> the file sizes from the FS,
> e.g.:
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g. TPC-DS store_sales, which has 1000s of partitions); 
> a query can spend 1000s of seconds just waiting for this information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.
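
One direction such a change could take, sketched below with assumed names (the 
method name, pool size, and non-recursive listing are illustrative assumptions, 
not the attached patch):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelPartitionSizes {
  // Fetch per-partition data sizes concurrently, summing plain file lengths
  // instead of calling the expensive, recursive FS.getContentSummary.
  static List<Long> fileSizesForPartitions(Configuration conf, List<Path> locations)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(16);
    try {
      List<Future<Long>> futures = new ArrayList<>();
      for (Path location : locations) {
        futures.add(pool.submit(() -> {
          FileSystem fs = location.getFileSystem(conf);
          long total = 0;
          for (FileStatus status : fs.listStatus(location)) {
            total += status.getLen(); // note: does not recurse into subdirs
          }
          return total;
        }));
      }
      List<Long> sizes = new ArrayList<>();
      for (Future<Long> f : futures) {
        sizes.add(f.get());
      }
      return sizes;
    } finally {
      pool.shutdown();
    }
  }
}
{code}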



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407573#comment-15407573
 ] 

Rajesh Balamohan commented on HIVE-14423:
-

https://reviews.apache.org/r/50788/

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in the metastore, Hive tries to get 
> the file sizes from the FS,
> e.g.:
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g. TPC-DS store_sales, which has 1000s of partitions); 
> a query can spend 1000s of seconds just waiting for this information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14423:

Fix Version/s: (was: 2.2.0)

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in the metastore, Hive tries to get 
> the file sizes from the FS,
> e.g.:
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g. TPC-DS store_sales, which has 1000s of partitions); 
> a query can spend 1000s of seconds just waiting for this information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14423:

Target Version/s: 2.1.0

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in the metastore, Hive tries to get 
> the file sizes from the FS,
> e.g.:
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g. TPC-DS store_sales, which has 1000s of partitions); 
> a query can spend 1000s of seconds just waiting for this information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14423:

Attachment: HIVE-14423.1.patch

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14423.1.patch
>
>
> When partition stats are not available in metastore, it tries to get the file 
> sizes from FS.
> e.g
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive in some FS like S3. Especially when table is 
> partitioned (e.g TPC-DS store_sales which has 1000s of partitions), query can 
> spend 1000s of seconds just waiting for these information to be pulled in.
> Also, it would be good to remove FS.getContentSummary usage to find out file 
> sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14423) S3: Fetching partition sizes from FS can be expensive when stats are not available in metastore

2016-08-04 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan reassigned HIVE-14423:
---

Assignee: Rajesh Balamohan

> S3: Fetching partition sizes from FS can be expensive when stats are not 
> available in metastore 
> 
>
> Key: HIVE-14423
> URL: https://issues.apache.org/jira/browse/HIVE-14423
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
>
> When partition stats are not available in the metastore, Hive tries to get 
> the file sizes from the FS,
> e.g.:
> {noformat}
> at 
> org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1487)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getFileSizeForPartitions(StatsUtils.java:598)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:235)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:144)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:132)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:126)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> {noformat}
> This can be quite expensive on some filesystems like S3, especially when the 
> table is partitioned (e.g. TPC-DS store_sales, which has 1000s of partitions); 
> a query can spend 1000s of seconds just waiting for this information to be pulled in.
> Also, it would be good to remove the FS.getContentSummary usage for finding 
> file sizes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12924) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_ppr_multi_distinct.q failure

2016-08-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407549#comment-15407549
 ] 

Hive QA commented on HIVE-12924:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786962/HIVE-12924.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10440 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityMultiplePreemptionsSameHost2
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/761/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/761/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-761/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786962 - PreCommit-HIVE-MASTER-Build

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_ppr_multi_distinct.q failure
> 
>
> Key: HIVE-12924
> URL: https://issues.apache.org/jira/browse/HIVE-12924
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Vineet Garg
> Attachments: HIVE-12924.1.patch, HIVE-12924.2.patch, 
> HIVE-12924.3.patch
>
>
> {code}
> EXPLAIN EXTENDED
> FROM srcpart src
> INSERT OVERWRITE TABLE dest1
> SELECT substr(src.key,1,1), count(DISTINCT substr(src.value,5)), 
> concat(substr(src.key,1,1),sum(substr(src.value,5))), sum(DISTINCT 
> substr(src.value, 5)), count(DISTINCT src.value)
> WHERE src.ds = '2008-04-08'
> GROUP BY substr(src.key,1,1)
> {code}
> Ended Job = job_local968043618_0742 with errors
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14426) Extensive logging on info level in WebHCat

2016-08-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-14426:
--
Status: Patch Available  (was: Open)

> Extensive logging on info level in WebHCat
> --
>
> Key: HIVE-14426
> URL: https://issues.apache.org/jira/browse/HIVE-14426
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
> Attachments: HIVE-14426.patch
>
>
> There is extensive logging in WebHCat at the info level, and even some 
> sensitive information could be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14426) Extensive logging on info level in WebHCat

2016-08-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-14426:
--
Attachment: HIVE-14426.patch

Moved the logging to trace level.
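
For illustration, the kind of change this implies (the logger and message below 
are made up; only the level change is the point):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WebHCatLogging {
  private static final Logger LOG = LoggerFactory.getLogger(WebHCatLogging.class);

  static void logParams(String params) {
    // before: LOG.info("request parameters: " + params);  // emitted for every call
    // after: only emitted when trace logging is explicitly enabled
    LOG.trace("request parameters: {}", params);
  }
}
{code}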

> Extensive logging on info level in WebHCat
> --
>
> Key: HIVE-14426
> URL: https://issues.apache.org/jira/browse/HIVE-14426
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
> Attachments: HIVE-14426.patch
>
>
> There is extensive logging in WebHCat at the info level, and even some 
> sensitive information could be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14412) Add a timezone-aware timestamp

2016-08-04 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-14412:
--
Status: Patch Available  (was: Open)

> Add a timezone-aware timestamp
> --
>
> Key: HIVE-14412
> URL: https://issues.apache.org/jira/browse/HIVE-14412
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-14412.1.patch
>
>
> Java's Timestamp stores the time elapsed since the epoch. While it's by 
> itself unambiguous, ambiguity arises when we parse a string into a timestamp, 
> or convert a timestamp to a string, causing problems like HIVE-14305.
> To solve the issue, I think we should make the timestamp aware of its timezone.
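
A small self-contained illustration of the ambiguity, using java.time purely 
for demonstration (this is not the PoC patch):

{code}
import java.time.LocalDateTime;
import java.time.ZoneId;

public class TimestampAmbiguity {
  public static void main(String[] args) {
    // the same wall-clock string maps to different instants per zone
    LocalDateTime wallClock = LocalDateTime.parse("2016-08-04T10:00:00");
    System.out.println(wallClock.atZone(ZoneId.of("UTC")).toInstant());
    // 2016-08-04T10:00:00Z
    System.out.println(wallClock.atZone(ZoneId.of("America/Los_Angeles")).toInstant());
    // 2016-08-04T17:00:00Z (PDT is UTC-7 in August)
  }
}
{code}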



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14412) Add a timezone-aware timestamp

2016-08-04 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-14412:
--
Attachment: HIVE-14412.1.patch

Uploading a PoC patch for review and testing.

> Add a timezone-aware timestamp
> --
>
> Key: HIVE-14412
> URL: https://issues.apache.org/jira/browse/HIVE-14412
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-14412.1.patch
>
>
> Java's Timestamp stores the time elapsed since the epoch. While it's by 
> itself unambiguous, ambiguity arises when we parse a string into a timestamp, 
> or convert a timestamp to a string, causing problems like HIVE-14305.
> To solve the issue, I think we should make the timestamp aware of its timezone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14424) CLIRestoreTest failing in branch 2.1

2016-08-04 Thread Rajat Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajat Khandelwal updated HIVE-14424:

Attachment: HIVE-14424.1.patch

> CLIRestoreTest failing in branch 2.1
> 
>
> Key: HIVE-14424
> URL: https://issues.apache.org/jira/browse/HIVE-14424
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-14424.1.patch, HIVE-14424.patch
>
>
> {noformat}
> java.lang.RuntimeException: Error applying authorization policy on hive 
> configuration: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:113)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.getService(CLIServiceRestoreTest.java:48)
>   at 
> org.apache.hive.service.cli.CLIServiceRestoreTest.(CLIServiceRestoreTest.java:28)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:195)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:244)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:241)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:836)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1602)
>   at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:126)
>   at org.apache.hive.service.cli.CLIService.init(CLIService.java:110)
>   ... 22 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:385)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:812)
>   ... 25 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:375)
>   ... 26 more
> {noformat}
> But it is caused by HIVE-14221. Code changes are here: 
> https://github.com/apache/hive/commit/de5ae86ee70d9396d5cefc499507b5f31fecc916
> So the issue is that, in this patch, the class 
> org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
>  has been changed everywhere it is mentioned, except in one place. 

[jira] [Commented] (HIVE-8339) Job status not found after 100% succeded map

2016-08-04 Thread liuguanghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407469#comment-15407469
 ] 

liuguanghua commented on HIVE-8339:
---

Hello everyone!
This problem may be very easy to solve if you have run into the same problem.
Please see:
https://issues.apache.org/jira/browse/HIVE-14425

> Job status not found after 100% succeded map
> ---
>
> Key: HIVE-8339
> URL: https://issues.apache.org/jira/browse/HIVE-8339
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
> Environment: Hadoop 2.4.0, Hive 0.13.1.
> Amazon EMR cluster of 9 i2.4xlarge nodes.
> 800+GB of data in HDFS.
>Reporter: Valera Chevtaev
>Assignee: liuguanghua
>
> According to the logs, it seems the job succeeded 100% for both map and 
> reduce, but Hive was then unable to get the status of the job from the job 
> history server.
> Hive logs:
> 2014-10-03 07:57:26,593 INFO  [main]: exec.Task 
> (SessionState.java:printInfo(536)) - 2014-10-03 07:57:26,593 Stage-1 map = 
> 100%, reduce = 99%, Cumulative CPU 872541.02 sec
> 2014-10-03 07:57:47,447 INFO  [main]: exec.Task 
> (SessionState.java:printInfo(536)) - 2014-10-03 07:57:47,446 Stage-1 map = 
> 100%, reduce = 100%, Cumulative CPU 872566.55 sec
> 2014-10-03 07:57:48,710 INFO  [main]: mapred.ClientServiceDelegate 
> (ClientServiceDelegate.java:getProxy(273)) - Application state is completed. 
> FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> 2014-10-03 07:57:48,716 ERROR [main]: exec.Task 
> (SessionState.java:printError(545)) - Ended Job = job_1412263771568_0002 with 
> exception 'java.io.IOException(Could not find status of 
> job:job_1412263771568_0002)'
> java.io.IOException: Could not find status of job:job_1412263771568_0002
>at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
>at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547)
>at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
>at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
>at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
>at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
>at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
>at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
>at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
>at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:275)
>at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:227)
>at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:430)
>at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:366)
>at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:463)
>at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:479)
>at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:759)
>at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:697)
>at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:636)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:606)
>at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> 2014-10-03 07:57:48,763 ERROR [main]: ql.Driver 
> (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 
> from org.apache.hadoop.hive.ql.exec.mr.MapRedTask





[jira] [Assigned] (HIVE-14426) Extensive logging on info level in WebHCat

2016-08-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-14426:
-

Assignee: Peter Vary

> Extensive logging on info level in WebHCat
> --
>
> Key: HIVE-14426
> URL: https://issues.apache.org/jira/browse/HIVE-14426
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
>
> There is extensive logging in WebHCat at INFO level, and some sensitive 
> information may even be logged.





[jira] [Assigned] (HIVE-8339) Job status not found after 100% succeeded map

2016-08-04 Thread liuguanghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua reassigned HIVE-8339:
-

Assignee: liuguanghua

> Job status not found after 100% succeeded map
> ---
>
> Key: HIVE-8339
> URL: https://issues.apache.org/jira/browse/HIVE-8339
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
> Environment: Hadoop 2.4.0, Hive 0.13.1.
> Amazon EMR cluster of 9 i2.4xlarge nodes.
> 800+GB of data in HDFS.
>Reporter: Valera Chevtaev
>Assignee: liuguanghua
>
> According to the logs, it seems that the job succeeded 100% for both map and 
> reduce, but Hive was then unable to get the status of the job from the job 
> history server.
> Hive logs:
> 2014-10-03 07:57:26,593 INFO  [main]: exec.Task 
> (SessionState.java:printInfo(536)) - 2014-10-03 07:57:26,593 Stage-1 map = 
> 100%, reduce = 99%, Cumulative CPU 872541.02 sec
> 2014-10-03 07:57:47,447 INFO  [main]: exec.Task 
> (SessionState.java:printInfo(536)) - 2014-10-03 07:57:47,446 Stage-1 map = 
> 100%, reduce = 100%, Cumulative CPU 872566.55 sec
> 2014-10-03 07:57:48,710 INFO  [main]: mapred.ClientServiceDelegate 
> (ClientServiceDelegate.java:getProxy(273)) - Application state is completed. 
> FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> 2014-10-03 07:57:48,716 ERROR [main]: exec.Task 
> (SessionState.java:printError(545)) - Ended Job = job_1412263771568_0002 with 
> exception 'java.io.IOException(Could not find status of 
> job:job_1412263771568_0002)'
> java.io.IOException: Could not find status of job:job_1412263771568_0002
>at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
>at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547)
>at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
>at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
>at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
>at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
>at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
>at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
>at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
>at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:275)
>at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:227)
>at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:430)
>at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:366)
>at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:463)
>at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:479)
>at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:759)
>at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:697)
>at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:636)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:606)
>at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> 2014-10-03 07:57:48,763 ERROR [main]: ql.Driver 
> (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 
> from org.apache.hadoop.hive.ql.exec.mr.MapRedTask





[jira] [Updated] (HIVE-14425) java.io.IOException: Could not find status of job:job_*

2016-08-04 Thread liuguanghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HIVE-14425:
---
Description: 
java.io.IOException: Could not find status of job:job_1470047186803_13
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Ended Job = job_1470047186803_13 with exception 'java.io.IOException(Could 
not find status of job:job_1470047186803_13)'


  was:
java.io.IOException: Could not find status of job:job_1469704050741_109320
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Ended Job = job_1469704050741_109320 with exception 'java.io.IOException(Could 
not find status of job:job_1469704050741_109320)'



> java.io.IOException: Could not find status of job:job_*
> ---
>
> Key: HIVE-14425
> URL: https://issues.apache.org/jira/browse/HIVE-14425
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: hadoop2.7.2 + hive1.2.1
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>
> java.io.IOException: Could not find status of job:job_1470047186803_13
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
> at 

[jira] [Resolved] (HIVE-14425) java.io.IOException: Could not find status of job:job_*

2016-08-04 Thread liuguanghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua resolved HIVE-14425.

Resolution: Fixed

I finally resolved this problem.
I found the error logs from the ApplicationMaster:

2016-08-04 14:27:59,012 INFO [Thread-68] 
org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler 
failed in state STOPPED; cause: 
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException):
 The maximum path component name limit of 
job_1470047186803_13-1470292057380-ide-create++table+temp.tem...%28%27%E5%B7%B2%E5%8F%96%E6%B6%88%27%2C%27%E6%8B%92%E6%94%B6%E5%85%A5%E5%BA%93%27%2C%27%E9%A9%B3%E5%9B%9E%27%29%28Stage-1470292073175-1-0-SUCCEEDED-root.data_platform-1470292063756.jhist_tmp
 in directory /tmp/hadoop-yarn/staging/history/done_intermediate/ide is 
exceeded: limit=255 length=258
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxComponentLength(FSDirectory.java:911)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addLastINode(FSDirectory.java:976)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addINode(FSDirectory.java:838)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addFile(FSDirectory.java:426)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2575)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

  When the AM finishes, it produces two files: one is *.jhist and the other is 
*.conf. If the HQL begins or ends with Chinese characters, that part of the 
filename must be URL-encoded, which results in a much longer filename, so the 
job cannot be found in the history server.
   How to solve this?
   There is a simple way:
set hive.jobname.length=10; or smaller (the default value is 50)
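
To illustrate the length blow-up, here is a minimal, self-contained sketch (my 
own illustration, not code from Hive): each Chinese character is 3 bytes in 
UTF-8 and URL-encodes to 9 ASCII characters, so a 50-character job-name 
fragment can easily push the .jhist filename past the 255-byte path component 
limit seen in the log above.

{code:java}
import java.net.URLEncoder;

public class JhistNameBlowUp {
  public static void main(String[] args) throws Exception {
    // Three Chinese characters taken from the encoded path in the log above
    String fragment = "已取消";
    String encoded = URLEncoder.encode(fragment, "UTF-8");
    System.out.println(encoded);           // %E5%B7%B2%E5%8F%96%E6%B6%88
    System.out.println(encoded.length());  // 27 -- 9 ASCII chars per character
  }
}
{code}

Lowering hive.jobname.length keeps the query fragment embedded in the job name 
short enough that the encoded .jhist path component stays under the limit.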


> java.io.IOException: Could not find status of job:job_*
> ---
>
> Key: HIVE-14425
> URL: https://issues.apache.org/jira/browse/HIVE-14425
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: hadoop2.7.2 + hive1.2.1
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>
> java.io.IOException: Could not find status of job:job_1469704050741_109320
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at 

[jira] [Commented] (HIVE-14374) BeeLine argument, and configuration handling cleanup

2016-08-04 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407458#comment-15407458
 ] 

Peter Vary commented on HIVE-14374:
---

I looked up the remaining attributes in the code and in the jiras:
- entirelineascommand - required for SchemaTool, but not intended for public 
usage
- maxheight, maxwidth - I was not able to find anything about the intended 
usage (do we need a command line argument for them or not) - maxwidth 
specifically is not saved, but it is loaded, and it is possible to set it by a 
command line argument. Maybe [~cwsteinbach] or [~ashutoshc] could tell more. 
Otherwise I think it would be a good idea to make it settable from the command 
line.
- timeout - the code does not use it, and the jira does not help. Maybe 
[~cwsteinbach] or [~ashutoshc] could tell more. Otherwise I think we should 
remove it altogether.
- showelapsedtime - the jira does not help, but I think it would be a good 
idea to make it settable from the command line.
- lastconnectedurl - looking through the jira (HIVE-13670), the author 
([~sushanth]) was aware of the "using reflection to set command line 
arguments" feature, but still did not put it into the documentation. So I 
think this is not intended as a command line argument. [~sushanth], please 
comment if I misunderstand anything. Otherwise we should remove the command 
line setting possibility.
- trimscripts - the jira does not help, but I think it would be a good idea to 
make it settable from the command line.
- historyfile - the jira does not help, but I think it would be a good idea to 
make it settable from the command line.
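
For reference, a minimal sketch of how the two annotations proposed in the 
description below could be declared (names taken from the proposal; the 
retention/target choices are my assumption):

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Marks a BeeLineOpts setter as settable from the command line. */
@Retention(RetentionPolicy.RUNTIME)  // must be visible to reflection at runtime
@Target(ElementType.METHOD)
@interface CommandLineOption {
  String value();  // mandatory help text displayed to the user
}

/** Marks a BeeLineOpts setter as settable from the configuration file. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ConfigurationFileOption {
}
{code}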

> BeeLine argument, and configuration handling cleanup
> 
>
> Key: HIVE-14374
> URL: https://issues.apache.org/jira/browse/HIVE-14374
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> BeeLine uses reflection, to set the BeeLineOpts attributes when parsing 
> command line arguments, and when loading the configuration file.
> This means, that creating a setXXX, getXXX method in BeeLineOpts is a 
> potential risk of exposing an attribute for the user unintentionally. There 
> is a possibility to exclude an attribute from saving the value in the 
> configuration file with the Ignore annotation. This does not restrict the 
> loading or command line setting of these parameters which means there are 
> many undocumented "features" as-is, like setting the lastConnectedUrl, 
> allowMultilineCommand, maxHeight, trimScripts, etc. from command line.
> This part of the code needs a little cleanup.
> I think we should make this exposure more explicit, and be able to 
> differentiate the configurable options depending on the source (command line, 
> and configuration file), so I propose to create a mechanism to tell 
> explicitly which BeeLineOpts attributes are settable by command line, and 
> configuration file, and every other attribute should be inaccessible by the 
> user of the beeline cli.
> One possible solution could be two annotations like these:
> - CommandLineOption - there could be a mandatory text parameter here, so the 
> developer had to provide the help text for it which could be displayed to the 
> user
> - ConfigurationFileOption - no text is required here
> Something like this:
> - This attribute could be provided by command line, and from a configuration 
> file too:
> {noformat}
> @CommandLineOption("automatically save preferences")
> @ConfigurationFileOption
> public void setAutosave(boolean autosave) {
>   this.autosave = autosave;
> }
> public boolean getAutosave() {
>   return this.autosave;
> }
> {noformat}
> - This attribute could be set through the configuration only
> {noformat}
> @ConfigurationFileOption
> public void setLastConnectedUrl(String lastConnectedUrl) {
>   this.lastConnectedUrl = lastConnectedUrl;
> }
>
> public String getLastConnectedUrl() {
>   return lastConnectedUrl;
> }
> {noformat}
> - Attribute could be set through command line only - I think this is not too 
> relevant, but possible
> {noformat}
> @CommandLineOption("specific command line option")
> public void setSpecificCommandLineOption(String specificCommandLineOption) {
>   this.specificCommandLineOption = specificCommandLineOption;
> }
>
> public String getSpecificCommandLineOption() {
>   return specificCommandLineOption;
> }
> {noformat}
> - Attribute could not be set
> {noformat}
> public static Env getEnv() {
>   return env;
> }
>
> public static void setEnv(Env envToUse) {
>   env = envToUse;
> }
> {noformat}
> According to our previous conversations, I think you might be interested in 
> this: [~spena], [~vihangk1], [~aihuaxu], [~ngangam], [~ychena], [~xuefuz] - 
> but anyone is welcome to discuss this.
> What do you think about the proposed solution?
> Any better ideas, or extensions?
> Thanks,
> Peter

[jira] [Updated] (HIVE-14425) java.io.IOException: Could not find status of job:job_*

2016-08-04 Thread liuguanghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HIVE-14425:
---
Summary: java.io.IOException: Could not find status of job:job_*  (was: 
java.io.IOException: Could not find status of job:job_1469704050741_109320)

> java.io.IOException: Could not find status of job:job_*
> ---
>
> Key: HIVE-14425
> URL: https://issues.apache.org/jira/browse/HIVE-14425
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: hadoop2.7.2 + hive1.2.1
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>
> java.io.IOException: Could not find status of job:job_1469704050741_109320
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Ended Job = job_1469704050741_109320 with exception 
> 'java.io.IOException(Could not find status of job:job_1469704050741_109320)'





[jira] [Assigned] (HIVE-14425) java.io.IOException: Could not find status of job:job_1469704050741_109320

2016-08-04 Thread liuguanghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua reassigned HIVE-14425:
--

Assignee: liuguanghua

> java.io.IOException: Could not find status of job:job_1469704050741_109320
> --
>
> Key: HIVE-14425
> URL: https://issues.apache.org/jira/browse/HIVE-14425
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: hadoop2.7.2 + hive1.2.1
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>
> java.io.IOException: Could not find status of job:job_1469704050741_109320
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:437)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Ended Job = job_1469704050741_109320 with exception 
> 'java.io.IOException(Could not find status of job:job_1469704050741_109320)'





[jira] [Commented] (HIVE-13589) beeline - support prompt for password with '-u' option

2016-08-04 Thread Ke Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407437#comment-15407437
 ] 

Ke Jia commented on HIVE-13589:
---

Hi [~thejas], the current patch works for the case of specifying '-p' without 
an argument to make beeline prompt for a password. However, it only supports 
the case where "-p" is followed by one of the options registered in 
Beeline.java [L290-L391]. Otherwise, Apache Commons CLI will consider the 
following token to be the value of "-p". We could require users to put "-p" at 
the end of the command line. Do you have better suggestions?
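
To make the ambiguity concrete, here is a minimal Apache Commons CLI sketch 
(my own illustration, not the actual Beeline parsing code): when "-p" declares 
an optional argument, the parser backs off if the next token is a registered 
option, but swallows any other token as the password.

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class PasswordOptionDemo {
  public static void main(String[] args) throws Exception {
    Options options = new Options();
    Option p = new Option("p", true, "password");
    p.setOptionalArg(true);                // allow "-p" with no value (prompt)
    options.addOption(p);
    options.addOption("u", true, "JDBC url");

    // "-u" is a registered option, so the parser backs off and -p stays empty:
    CommandLine a = new GnuParser().parse(
        options, new String[] {"-p", "-u", "jdbc:hive2://localhost:10000"});
    System.out.println(a.getOptionValue("p"));   // null -> beeline can prompt

    // any unregistered token is swallowed as the password instead:
    CommandLine b = new GnuParser().parse(
        options, new String[] {"-p", "/tmp/select.sql"});
    System.out.println(b.getOptionValue("p"));   // /tmp/select.sql
  }
}
{code}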

> beeline - support prompt for password with '-u' option
> --
>
> Key: HIVE-13589
> URL: https://issues.apache.org/jira/browse/HIVE-13589
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Thejas M Nair
>Assignee: Ke Jia
> Attachments: HIVE-13589.1.patch
>
>
> Specifying connection string using commandline options in beeline is 
> convenient, as it gets saved in shell command history, and it is easy to 
> retrieve it from there.
> However, specifying the password in command prompt is not secure as it gets 
> displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline 
> prompt for password.





[jira] [Updated] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2016-08-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-14146:
--
Attachment: HIVE-14146.10.patch

Several more testcase changes:
- create_with_constraints.q.out - printing column names in constraints was not 
indented correctly before - my patch incidentally solves this
- alter_table_invalidate_column_stats.q.out, columnstats_part_coltype.q.out - 
column headers were not indented appropriately
- alter_view_as_select_with_partition.q - I forgot to add this to the patch 
last time - the multiline select is indented

The other testcases were about printing null instead of an empty string. The 
patch is updated to print it correctly.

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14146.10.patch, HIVE-14146.2.patch, 
> HIVE-14146.3.patch, HIVE-14146.4.patch, HIVE-14146.5.patch, 
> HIVE-14146.6.patch, HIVE-14146.7.patch, HIVE-14146.8.patch, 
> HIVE-14146.9.patch, HIVE-14146.patch
>
>
> Create a table with the following(noting the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual');
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +-------------------+------------+-----------------------+
> | col_name          | data_type  | comment               |
> +-------------------+------------+-----------------------+
> | first_nm          | string     | Indicates First name  |
> | of an individual  | NULL       | NULL                  |
> +-------------------+------------+-----------------------+
> {noformat}





[jira] [Assigned] (HIVE-14384) CBO: Decimal constant folding is failing

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-14384:
--

Assignee: Jesus Camacho Rodriguez

> CBO: Decimal constant folding is failing
> 
>
> Key: HIVE-14384
> URL: https://issues.apache.org/jira/browse/HIVE-14384
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>
> {code}
> explain select sum(l_extendedprice * l_discount) as revenue from lineitem 
> where l_shipdate >= '1993-01-01' and l_shipdate < '1994-01-01' and l_discount 
> between 0.06 - 0.01 and 0.06 + 0.01 and l_quantity < 25;
> {code}
> Fails CBO because of constant folding errors.
> {code}
> 2016-07-29T17:15:50,921 ERROR [0b3f62eb-8a80-40cd-8cf2-60bc835191a8 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IllegalArgumentException: Decimal scale must be less than or equal 
> to precision
> at 
> org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils.validateParameter(HiveDecimalUtils.java:53)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo.<init>(DecimalTypeInfo.java:36)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:157)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getDecimalTypeInfo(TypeInfoFactory.java:175)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitLiteral(ExprNodeConverter.java:259)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitLiteral(ExprNodeConverter.java:82)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at org.apache.calcite.rex.RexLiteral.accept(RexLiteral.java:657) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitCall(ExprNodeConverter.java:144)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitCall(ExprNodeConverter.java:82)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at org.apache.calcite.rex.RexCall.accept(RexCall.java:108) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.HiveRexExecutorImpl.reduce(HiveRexExecutorImpl.java:58)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule.reduceExpressionsInternal(HiveReduceExpressionsRule.java:376)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule.reduceExpressions(HiveReduceExpressionsRule.java:286)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] 
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(HiveReduceExpressionsRule.java:141)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:318)
>  ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:514) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:392) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:285)
>  ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:72)
>  ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:207) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.hepPlan(CalcitePlanner.java:1320)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:1191)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> 

[jira] [Commented] (HIVE-14384) CBO: Decimal constant folding is failing

2016-08-04 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15407421#comment-15407421
 ] 

Jesus Camacho Rodriguez commented on HIVE-14384:


The error seems to be in Calcite, in the method {{makeExactLiteral(BigDecimal)}} 
in RexBuilder. Basically, it infers an incorrect precision when the decimal is 
< 1, e.g. for 0.06 it infers the type to be Decimal(1,2) instead of Decimal(3,2).

{code:java}
  /**
   * Creates a numeric literal.
   */
  public RexLiteral makeExactLiteral(BigDecimal bd) {
RelDataType relType;
int scale = bd.scale();
long l = bd.unscaledValue().longValue();
assert scale >= 0;
assert scale <= typeFactory.getTypeSystem().getMaxNumericScale() : scale;
assert BigDecimal.valueOf(l, scale).equals(bd);
if (scale == 0) {
  if ((l >= Integer.MIN_VALUE) && (l <= Integer.MAX_VALUE)) {
relType = typeFactory.createSqlType(SqlTypeName.INTEGER);
  } else {
relType = typeFactory.createSqlType(SqlTypeName.BIGINT);
  }
} else {
  int precision = bd.unscaledValue().abs().toString().length();
  relType =
  typeFactory.createSqlType(SqlTypeName.DECIMAL, precision, scale);
}
return makeExactLiteral(bd, relType);
  }
{code}
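
A minimal sketch of the faulty inference for 0.06 (my own illustration of the 
arithmetic above, not a proposed fix):

{code:java}
import java.math.BigDecimal;

public class PrecisionInferenceDemo {
  public static void main(String[] args) {
    BigDecimal bd = new BigDecimal("0.06");
    int scale = bd.scale();                                        // 2
    int precision = bd.unscaledValue().abs().toString().length();  // "6" -> 1
    // precision (1) < scale (2), so Hive's DecimalTypeInfo validation
    // rejects the resulting Decimal(1,2) type
    System.out.println("precision=" + precision + ", scale=" + scale);
  }
}
{code}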

> CBO: Decimal constant folding is failing
> 
>
> Key: HIVE-14384
> URL: https://issues.apache.org/jira/browse/HIVE-14384
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.2.0
>Reporter: Gopal V
>
> {code}
> explain select sum(l_extendedprice * l_discount) as revenue from lineitem 
> where l_shipdate >= '1993-01-01' and l_shipdate < '1994-01-01' and l_discount 
> between 0.06 - 0.01 and 0.06 + 0.01 and l_quantity < 25;
> {code}
> Fails CBO because of constant folding errors.
> {code}
> 2016-07-29T17:15:50,921 ERROR [0b3f62eb-8a80-40cd-8cf2-60bc835191a8 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> java.lang.IllegalArgumentException: Decimal scale must be less than or equal 
> to precision
> at 
> org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils.validateParameter(HiveDecimalUtils.java:53)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo.<init>(DecimalTypeInfo.java:36)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:157)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getDecimalTypeInfo(TypeInfoFactory.java:175)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitLiteral(ExprNodeConverter.java:259)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitLiteral(ExprNodeConverter.java:82)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at org.apache.calcite.rex.RexLiteral.accept(RexLiteral.java:657) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitCall(ExprNodeConverter.java:144)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.ExprNodeConverter.visitCall(ExprNodeConverter.java:82)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at org.apache.calcite.rex.RexCall.accept(RexCall.java:108) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.HiveRexExecutorImpl.reduce(HiveRexExecutorImpl.java:58)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule.reduceExpressionsInternal(HiveReduceExpressionsRule.java:376)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule.reduceExpressions(HiveReduceExpressionsRule.java:286)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] 
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(HiveReduceExpressionsRule.java:141)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:318)
>  ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:514) 
> ~[calcite-core-1.6.0.jar:1.6.0]
> at 
> 
