[jira] [Commented] (HIVE-12981) ThriftCLIService uses incompatible getShortName() implementation

2016-02-10 Thread Bolke de Bruin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140858#comment-15140858
 ] 

Bolke de Bruin commented on HIVE-12981:
---

Any feedback on this?

> ThriftCLIService uses incompatible getShortName() implementation
> 
>
> Key: HIVE-12981
> URL: https://issues.apache.org/jira/browse/HIVE-12981
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication, Authorization, CLI, Security
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Bolke de Bruin
>Assignee: Thejas M Nair
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HIVE-12981-Use-KerberosName.patch
>
>
> ThriftCLIService has a local implementation getShortName() that assumes a 
> short name is always the part before "@" and "/". This is not always the case 
> as Kerberos Rules (from Hadoop's KerberosName) might actually transform a 
> name to something else.
> Considering a pending change to getShortName() (HADOOP-12751) and the normal 
> use of KerberosName in other parts of Hive, it only seems logical to use the 
> standard implementation.
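To make the incompatibility concrete, here is a minimal, self-contained sketch (illustrative names, not Hive's actual code) of the naive truncation the local getShortName() performs; the comments contrast it with what a KerberosName auth_to_local rule could produce:

```java
public class ShortNameSketch {
    // Roughly what a naive local getShortName() does: take everything
    // before the first '/' or '@' in the Kerberos principal.
    static String naiveShortName(String principal) {
        int cut = principal.length();
        for (char c : new char[] {'/', '@'}) {
            int i = principal.indexOf(c);
            if (i >= 0 && i < cut) cut = i;
        }
        return principal.substring(0, cut);
    }

    public static void main(String[] args) {
        // For simple principals both approaches agree...
        System.out.println(naiveShortName("hive/host1.example.com@EXAMPLE.COM")); // hive
        // ...but an auth_to_local rule such as
        //   RULE:[1:$1@$0](bolke@EXAMPLE.COM)s/.*/bdebruin/
        // would map bolke@EXAMPLE.COM to "bdebruin", while the naive
        // version still returns "bolke" -- hence the incompatibility.
        System.out.println(naiveShortName("bolke@EXAMPLE.COM")); // bolke
    }
}
```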



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13000) Hive returns useless parsing error

2016-02-10 Thread Alina Abramova (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alina Abramova updated HIVE-13000:
--
Attachment: HIVE-13000.3.patch

> Hive returns useless parsing error 
> ---
>
> Key: HIVE-13000
> URL: https://issues.apache.org/jira/browse/HIVE-13000
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.0.0, 1.2.1
>Reporter: Alina Abramova
>Assignee: Alina Abramova
>Priority: Minor
> Attachments: HIVE-13000.1.patch, HIVE-13000.2.patch, 
> HIVE-13000.3.patch
>
>
> When I run a query like the one below, I receive an unclear exception:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException Error in parsing 
> It would be clearer if it were:
> hive> SELECT record FROM ctest GROUP BY record.instance_id;
> FAILED: SemanticException  Expression not in GROUP BY key record





[jira] [Commented] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-10 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140964#comment-15140964
 ] 

Naveen Gangam commented on HIVE-11866:
--

[~thejas] Could you review the patch and confirm that everything looks good 
from a legal perspective? I am using Apache Directory Service this time. Thanks.

> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, 
> HIVE-11866.4.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.





[jira] [Commented] (HIVE-12158) Add methods to HCatClient for partition synchronization

2016-02-10 Thread David Maughan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141031#comment-15141031
 ] 

David Maughan commented on HIVE-12158:
--

Hi [~sushanth], are you able to advise how to move this ticket along?

> Add methods to HCatClient for partition synchronization
> ---
>
> Key: HIVE-12158
> URL: https://issues.apache.org/jira/browse/HIVE-12158
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.0.0
>Reporter: David Maughan
>Assignee: David Maughan
>Priority: Minor
>  Labels: hcatalog
> Attachments: HIVE-12158.1.patch
>
>
> We have a use case where we have a list of partitions that are created as a 
> result of a batch job (new or updated) outside of Hive and would like to 
> synchronize them with the Hive MetaStore. We would like to use the HCatalog 
> {{HCatClient}} but it currently does not seem to support this. However it is 
> possible with the {{HiveMetaStoreClient}} directly. I am proposing to add the 
> following method to {{HCatClient}} and {{HCatClientHMSImpl}}:
> A method for altering partitions. The implementation would delegate to 
> {{HiveMetaStoreClient#alter_partitions}}. I've used "update" instead of 
> "alter" in the name so it's consistent with the 
> {{HCatClient#updateTableSchema}} method.
> {code}
> public void updatePartitions(List<HCatPartition> partitions) throws 
> HCatException
> {code}
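The proposed delegation can be sketched as follows. This is a hedged illustration with simplified stand-in types (Partition, MetaStoreClient, HCatClientImpl are not Hive's real classes); it only shows the shape of an updatePartitions() that forwards to the metastore client's alter-partitions call, mirroring updateTableSchema():

```java
import java.util.List;

// Illustrative delegation pattern only -- not the actual HIVE-12158 patch.
public class UpdatePartitionsSketch {
    /** Stand-in for a partition descriptor such as HCatPartition. */
    static class Partition {
        final String table; final String spec;
        Partition(String table, String spec) { this.table = table; this.spec = spec; }
    }

    /** Stand-in for HiveMetaStoreClient#alter_partitions. */
    interface MetaStoreClient {
        void alterPartitions(String table, List<Partition> parts);
    }

    /** Stand-in for an HCatClient implementation delegating to the HMS client. */
    static class HCatClientImpl {
        private final MetaStoreClient hmsClient;
        HCatClientImpl(MetaStoreClient hmsClient) { this.hmsClient = hmsClient; }

        // "update" rather than "alter", consistent with updateTableSchema()
        public void updatePartitions(List<Partition> partitions) {
            if (partitions.isEmpty()) return;
            // Simplifying assumption: all partitions belong to one table.
            hmsClient.alterPartitions(partitions.get(0).table, partitions);
        }
    }
}
```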





[jira] [Updated] (HIVE-13017) Child process of HiveServer2 fails to get delegation token from non default FileSystem

2016-02-10 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13017:

Attachment: HIVE-13017.2.patch

Updated the patch to make sure it polls the default fs as well.
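The idea behind the fix can be sketched as collecting one delegation token per distinct filesystem, with the default FS always included. This is an illustrative stand-in, not the actual patch; Fs and SimpleFs below are simplified substitutes for Hadoop's FileSystem API:

```java
import java.net.URI;
import java.util.LinkedHashSet;
import java.util.Set;

// Hedged sketch: request a delegation token for every distinct filesystem a
// query touches -- the default FS (e.g. Azure) plus any others (e.g. an HDFS
// scratch dir), de-duplicated by filesystem URI.
public class TokenCollectorSketch {
    interface Fs {
        URI getUri();
        String getDelegationToken(String renewer);
    }

    static class SimpleFs implements Fs {
        private final URI uri; private final String token;
        SimpleFs(URI uri, String token) { this.uri = uri; this.token = token; }
        public URI getUri() { return uri; }
        public String getDelegationToken(String renewer) { return token; }
    }

    /** One token per distinct filesystem; the default FS is always polled. */
    static Set<String> collectTokens(Fs defaultFs, Iterable<Fs> others, String renewer) {
        Set<URI> seen = new LinkedHashSet<>();
        Set<String> tokens = new LinkedHashSet<>();
        seen.add(defaultFs.getUri());
        tokens.add(defaultFs.getDelegationToken(renewer));
        for (Fs fs : others) {
            if (seen.add(fs.getUri())) {   // skip filesystems already covered
                tokens.add(fs.getDelegationToken(renewer));
            }
        }
        return tokens;
    }
}
```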

> Child process of HiveServer2 fails to get delegation token from non default 
> FileSystem
> --
>
> Key: HIVE-13017
> URL: https://issues.apache.org/jira/browse/HIVE-13017
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 1.2.1
> Environment: Secure 
>Reporter: Takahiko Saito
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13017.2.patch, HIVE-13017.patch
>
>
> The following query fails, when Azure Filesystem is used as default file 
> system, and HDFS is used for intermediate data.
> {noformat}
> >>>  create temporary table s10k stored as orc as select * from studenttab10k;
> >>>  create temporary table v10k as select * from votertab10k;
> >>>  select registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;
> ERROR : Execution failed with exit status: 2
> ERROR : Obtaining error information
> ERROR : 
> Task failed!
> Task ID:
>   Stage-5
> Logs:
> ERROR : /var/log/hive/hiveServer2.log
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=2)
> Aborting command set because "force" is false and command failed: "select 
> registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;"
> Closing: 0: 
> jdbc:hive2://zk2-hs21-h.hdinsight.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@hdinsight.net;transportMode=http;httpPath=cliservice
> hiveServer2.log shows:
> 2016-02-02 18:04:34,182 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,199 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,212 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(127)) - Could not 
> validate cookie sent, will try to generate a new cookie
> 2016-02-02 18:04:34,213 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:checkConcurrency(168)) - Concurrency mode is disabled, 
> not creating a lock manager
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(352)) - 
> Failed to authenticate with http/_HOST kerberos principal, trying with 
> hive/_HOST kerberos principal
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,225 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1390)) - Setting caller context to query id 
> hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0
> 2016-02-02 18:04:34,226 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1393)) - Starting 
> command(queryId=hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0): 
> select registration
> from s10k s join v10k v
> on (s.name = v.name) join studentparttab30k p
> on (p.name = v.name)
> where s.age < 25 and v.age < 25 and p.age < 25
> 2016-02-02 18:04:34,228 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
> 2016-02-02 18:04:34,229 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,237 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(169)) - Cookie added 
> for clientUserName hrt_qa
> 2016-02-02 18:04:34,238 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1454436274229 
> end=1454436274238 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,239 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  

[jira] [Commented] (HIVE-12237) Use slf4j as logging facade

2016-02-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141466#comment-15141466
 ] 

Prasanth Jayachandran commented on HIVE-12237:
--

[~cartershanklin] This is similar to HIVE-12402 but for HS2. Unfortunately, the 
CLI and HS2 use different command-line option processors. HIVE-12402 provided a 
compat option for the CLI, but I missed it for HS2. The workaround is to specify 
the logger and level separately, like below:
{code}
hive --service hiveserver2 --hiveconf hive.root.logger=console --hiveconf 
hive.log.level=DEBUG
{code}

> Use slf4j as logging facade
> ---
>
> Key: HIVE-12237
> URL: https://issues.apache.org/jira/browse/HIVE-12237
> Project: Hive
>  Issue Type: Task
>  Components: Logging
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.0.0
>
> Attachments: HIVE-12237.1.patch, HIVE-12237.2.patch, 
> HIVE-12237.3.patch, HIVE-12237.4.patch, HIVE-12237.5.patch, 
> HIVE-12237.6.patch, HIVE-12237.patch
>
>






[jira] [Updated] (HIVE-13017) Child process of HiveServer2 fails to get delegation token from non default FileSystem

2016-02-10 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13017:

Attachment: HIVE-13017.patch

Patch attached.

> Child process of HiveServer2 fails to get delegation token from non default 
> FileSystem
> --
>
> Key: HIVE-13017
> URL: https://issues.apache.org/jira/browse/HIVE-13017
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 1.2.1
> Environment: Secure 
>Reporter: Takahiko Saito
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13017.patch
>
>
> The following query fails, when Azure Filesystem is used as default file 
> system, and HDFS is used for intermediate data.
> {noformat}
> >>>  create temporary table s10k stored as orc as select * from studenttab10k;
> >>>  create temporary table v10k as select * from votertab10k;
> >>>  select registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;
> ERROR : Execution failed with exit status: 2
> ERROR : Obtaining error information
> ERROR : 
> Task failed!
> Task ID:
>   Stage-5
> Logs:
> ERROR : /var/log/hive/hiveServer2.log
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=2)
> Aborting command set because "force" is false and command failed: "select 
> registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;"
> Closing: 0: 
> jdbc:hive2://zk2-hs21-h.hdinsight.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@hdinsight.net;transportMode=http;httpPath=cliservice
> hiveServer2.log shows:
> 2016-02-02 18:04:34,182 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,199 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,212 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(127)) - Could not 
> validate cookie sent, will try to generate a new cookie
> 2016-02-02 18:04:34,213 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:checkConcurrency(168)) - Concurrency mode is disabled, 
> not creating a lock manager
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(352)) - 
> Failed to authenticate with http/_HOST kerberos principal, trying with 
> hive/_HOST kerberos principal
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,225 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1390)) - Setting caller context to query id 
> hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0
> 2016-02-02 18:04:34,226 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1393)) - Starting 
> command(queryId=hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0): 
> select registration
> from s10k s join v10k v
> on (s.name = v.name) join studentparttab30k p
> on (p.name = v.name)
> where s.age < 25 and v.age < 25 and p.age < 25
> 2016-02-02 18:04:34,228 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
> 2016-02-02 18:04:34,229 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,237 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(169)) - Cookie added 
> for clientUserName hrt_qa
> 2016-02-02 18:04:34,238 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1454436274229 
> end=1454436274238 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,239 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.security.authorization.plugin.DisallowTransformHook
>  from=org.apache.hadoop.hive.ql.Driver>
> 

[jira] [Commented] (HIVE-1608) use sequencefile as the default for storing intermediate results

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141457#comment-15141457
 ] 

Hive QA commented on HIVE-1608:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787126/HIVE-1608.5.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6934/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6934/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6934/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6934/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2663f49 HIVE-12987: Add metrics for HS2 active users and SQL 
operations(Jimmy, reviewed by Szehon, Aihua)
+ git clean -f -d
Removing 
hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java.orig
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 2663f49 HIVE-12987: Add metrics for HS2 active users and SQL 
operations(Jimmy, reviewed by Szehon, Aihua)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787126 - PreCommit-HIVE-TRUNK-Build

> use sequencefile as the default for storing intermediate results
> 
>
> Key: HIVE-1608
> URL: https://issues.apache.org/jira/browse/HIVE-1608
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.7.0
>Reporter: Namit Jain
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-1608.1.patch, HIVE-1608.2.patch, HIVE-1608.3.patch, 
> HIVE-1608.4.patch, HIVE-1608.5.patch, HIVE-1608.patch
>
>
> The only argument for having a text file for storing intermediate results 
> seems to be better debuggability.
> But tailing a sequence file is possible, and it should be more 
> space-efficient.





[jira] [Commented] (HIVE-13030) Javadocs issue: Hive HCatalog build failed with IBM JDK 1.8 during Maven release

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141455#comment-15141455
 ] 

Hive QA commented on HIVE-13030:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787060/HIVE-13030.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9753 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6933/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6933/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787060 - PreCommit-HIVE-TRUNK-Build

> Javadocs issue: Hive HCatalog build failed with IBM JDK 1.8 during Maven 
> release
> 
>
> Key: HIVE-13030
> URL: https://issues.apache.org/jira/browse/HIVE-13030
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Hive, WebHCat
>Affects Versions: 1.2.1
> Environment: Hive 1.2.1 + IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: HCatlog, Hive, IBM, Java, WebHCat, build, javadocs, 
> maven, release
> Fix For: 1.2.1
>
> Attachments: HIVE-13030.patch, hive_build_javadocs_errors.txt
>
>
> When building Hive with the IBM JDK 1.8, the maven release build is failing 
> because of missing javadocs in the Hive HCatalog WebHCat module.
> All the errors are related to missing javadocs:
> 10:55:17 [INFO] [INFO] Hive HCatalog Webhcat . 
> FAILURE [12.229s]
> 10:55:17 [INFO] [INFO] Hive HCatalog Streaming ... 
> SKIPPED
> 10:55:17 [INFO] [INFO] Hive HWI .. 
> SKIPPED
> 10:55:17 [INFO] [INFO] Hive ODBC . 
> SKIPPED
> 10:55:17 [INFO] [INFO] Hive Shims Aggregator . 
> SKIPPED
> 10:55:17 [INFO] [INFO] Hive TestUtils  
> SKIPPED
> 10:55:17 [INFO] [INFO] Hive Packaging  
> SKIPPED
> 10:55:17 [INFO] [INFO] 
> 
> 10:55:17 [INFO] [INFO] BUILD FAILURE
> 10:55:17 [INFO] [INFO] 
> 
> 10:55:17 [INFO] [INFO] Total time: 4:10.477s
> 10:55:17 [INFO] [INFO] Finished at: Wed Feb 03 10:55:18 PST 2016
> 10:55:17 [INFO] [INFO] Final Memory: 79M/377M
> 10:55:17 [INFO] [INFO] 
> 
> 10:55:17 [INFO] [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.4:jar (attach-javadocs) on 
> project hive-webhcat: Error while creating archive:Exit code: 1 - 
> /a/workspace//hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java:353:
>  warning: no @return
> 10:55:17 [INFO] [ERROR] public Collection<String> hiveProps() {
> 10:55:17 [INFO] [ERROR] ^
> .
> .
> .
> .
> .
> There are lots of such errors in the HCatalog package.
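The fix for each such warning is mechanical: complete the method's Javadoc. A hedged illustration follows (the real AppConfig.hiveProps() signature and body differ; the property names below are made up):

```java
import java.util.Arrays;
import java.util.Collection;

// Illustrative only: the kind of Javadoc the maven-javadoc-plugin demands.
// This is not the actual AppConfig class; it just shows the missing @return.
public class JavadocFixSketch {
    /**
     * Lists the Hive configuration properties to pass through to child jobs.
     *
     * @return the collection of property names (adding this tag silences the
     *         "warning: no @return" shown in the build log above)
     */
    public Collection<String> hiveProps() {
        return Arrays.asList("hive.metastore.uris", "hive.exec.scratchdir");
    }
}
```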





[jira] [Commented] (HIVE-12158) Add methods to HCatClient for partition synchronization

2016-02-10 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141322#comment-15141322
 ] 

Sushanth Sowmyan commented on HIVE-12158:
-

Hi,

The patch looks reasonable on the face of it. I'll be able to review it in 
detail by this weekend and get back to you.

In the meantime, [~mithun], do you want to look at/review this patch?

> Add methods to HCatClient for partition synchronization
> ---
>
> Key: HIVE-12158
> URL: https://issues.apache.org/jira/browse/HIVE-12158
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.0.0
>Reporter: David Maughan
>Assignee: David Maughan
>Priority: Minor
>  Labels: hcatalog
> Attachments: HIVE-12158.1.patch
>
>
> We have a use case where we have a list of partitions that are created as a 
> result of a batch job (new or updated) outside of Hive and would like to 
> synchronize them with the Hive MetaStore. We would like to use the HCatalog 
> {{HCatClient}} but it currently does not seem to support this. However it is 
> possible with the {{HiveMetaStoreClient}} directly. I am proposing to add the 
> following method to {{HCatClient}} and {{HCatClientHMSImpl}}:
> A method for altering partitions. The implementation would delegate to 
> {{HiveMetaStoreClient#alter_partitions}}. I've used "update" instead of 
> "alter" in the name so it's consistent with the 
> {{HCatClient#updateTableSchema}} method.
> {code}
> public void updatePartitions(List<HCatPartition> partitions) throws 
> HCatException
> {code}





[jira] [Updated] (HIVE-12981) ThriftCLIService uses incompatible getShortName() implementation

2016-02-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12981:

Attachment: HIVE-12981.patch

I am re-attaching the same patch with a different name so that HiveQA will 
pick it up and run the tests.

> ThriftCLIService uses incompatible getShortName() implementation
> 
>
> Key: HIVE-12981
> URL: https://issues.apache.org/jira/browse/HIVE-12981
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication, Authorization, CLI, Security
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Bolke de Bruin
>Assignee: Thejas M Nair
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HIVE-12981-Use-KerberosName.patch, HIVE-12981.patch
>
>
> ThriftCLIService has a local implementation getShortName() that assumes a 
> short name is always the part before "@" and "/". This is not always the case 
> as Kerberos Rules (from Hadoop's KerberosName) might actually transform a 
> name to something else.
> Considering a pending change to getShortName() (HADOOP-12751) and the normal 
> use of KerberosName in other parts of Hive, it only seems logical to use the 
> standard implementation.





[jira] [Commented] (HIVE-13034) Add jdeb plugin to build debian

2016-02-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141504#comment-15141504
 ] 

Prasanth Jayachandran commented on HIVE-13034:
--

Hive needs JDK7 as the minimum required version.

"Depends: sun-java6-jre"

Should this be updated accordingly?



> Add jdeb plugin to build debian
> ---
>
> Key: HIVE-13034
> URL: https://issues.apache.org/jira/browse/HIVE-13034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Arshad Matin
>Assignee: Arshad Matin
>
> It would be nice to also generate a Debian package as part of the build. This 
> can be done by adding the jdeb plugin to the dist profile.





[jira] [Commented] (HIVE-13035) Enable Hive Server 2 to use a LDAP user and group search filters (RFC 2254).

2016-02-10 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141392#comment-15141392
 ] 

Naveen Gangam commented on HIVE-13035:
--

This would require us to use a separate bind DN from the user being 
authenticated. So the LDAP bind occurs with a specific user every time, and the 
authenticating users are found via an LDAP search based on configurable keys.
This is probably a better approach, since the Atn provider is a service with 
the same lifecycle as HiveServer2. However, it requires additional 
configuration, including putting a password value (the password for the bind 
user on an external system like LDAP) in hive-site.xml. This concerns me.

> Enable Hive Server 2 to use a LDAP user and group search filters (RFC 2254).
> 
>
> Key: HIVE-13035
> URL: https://issues.apache.org/jira/browse/HIVE-13035
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2
>Affects Versions: 1.2.1
>Reporter: Robert Justice
>Assignee: Vaibhav Gumashta
>  Labels: feature
>
> In some AD configurations, users may wish to authenticate with an attribute 
> other than sAMAccountName, such as uid=, which may not match and can cause 
> confusion. If LDAP user and group search filters existed (e.g. (uid={0})), 
> this would allow for such configurations.
> https://www.rfc-editor.org/rfc/rfc2254.txt
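For illustration, substituting the login name into such a filter template safely requires RFC 2254 escaping of the special filter characters. The sketch below is a hypothetical helper, not HS2's actual code:

```java
public class LdapFilterSketch {
    /** Escape the special filter characters listed in RFC 2254 section 4. */
    static String escape(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            switch (c) {
                case '*':  sb.append("\\2a"); break;
                case '(':  sb.append("\\28"); break;
                case ')':  sb.append("\\29"); break;
                case '\\': sb.append("\\5c"); break;
                case '\0': sb.append("\\00"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    /** Substitute the user into a search-filter template like "(uid={0})". */
    static String userFilter(String template, String user) {
        return template.replace("{0}", escape(user));
    }
}
```

With a template of (uid={0}) and user jdoe this yields (uid=jdoe), while hostile input such as a*b is escaped rather than interpreted as a wildcard.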





[jira] [Updated] (HIVE-13033) SPDO unnecessarily duplicates columns in key & value of mapper output

2016-02-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13033:

Attachment: (was: HIVE-13033.patch)

> SPDO unnecessarily duplicates columns in key & value of mapper output
> -
>
> Key: HIVE-13033
> URL: https://issues.apache.org/jira/browse/HIVE-13033
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>






[jira] [Commented] (HIVE-13036) Split hive.root.logger separately to make it compatible with log4j1.x (for remaining services)

2016-02-10 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141626#comment-15141626
 ] 

Sergey Shelukhin commented on HIVE-13036:
-

+1

> Split hive.root.logger separately to make it compatible with log4j1.x (for 
> remaining services)
> --
>
> Key: HIVE-13036
> URL: https://issues.apache.org/jira/browse/HIVE-13036
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13036.1.patch
>
>
> Similar to HIVE-12402 but for HS2 and metastore this time.





[jira] [Commented] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK

2016-02-10 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141569#comment-15141569
 ] 

Gopal V commented on HIVE-13020:


[~thejas]: ping?

> Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
> --
>
> Key: HIVE-13020
> URL: https://issues.apache.org/jira/browse/HIVE-13020
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Metastore, Shims
>Affects Versions: 1.2.0, 1.3.0, 1.2.1
> Environment: Linux X86_64 and IBM JDK 8
>Reporter: Greg Senia
>Assignee: Greg Senia
>  Labels: hdp, ibm, ibm-jdk
> Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, 
> hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, 
> hiveserver2_beforepatch.txt
>
>
> The HiveServer2 and Hive Metastore Zookeeper component is hardcoded to 
> support only the Oracle/OpenJDK. I was testing Hadoop running on the IBM JDK, 
> discovered this issue, and have since drawn up the attached patch. It looks 
> to resolve the issue in a similar manner to how the Hadoop core folks handle 
> the IBM JDK.





[jira] [Commented] (HIVE-13036) Split hive.root.logger separately to make it compatible with log4j1.x (for remaining services)

2016-02-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141621#comment-15141621
 ] 

Prasanth Jayachandran commented on HIVE-13036:
--

[~sershe] Could you please take a look?

> Split hive.root.logger separately to make it compatible with log4j1.x (for 
> remaining services)
> --
>
> Key: HIVE-13036
> URL: https://issues.apache.org/jira/browse/HIVE-13036
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13036.1.patch
>
>
> Similar to HIVE-12402 but for HS2 and metastore this time.





[jira] [Commented] (HIVE-12237) Use slf4j as logging facade

2016-02-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141624#comment-15141624
 ] 

Prasanth Jayachandran commented on HIVE-12237:
--

Created HIVE-13036 for fixing it

> Use slf4j as logging facade
> ---
>
> Key: HIVE-12237
> URL: https://issues.apache.org/jira/browse/HIVE-12237
> Project: Hive
>  Issue Type: Task
>  Components: Logging
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.0.0
>
> Attachments: HIVE-12237.1.patch, HIVE-12237.2.patch, 
> HIVE-12237.3.patch, HIVE-12237.4.patch, HIVE-12237.5.patch, 
> HIVE-12237.6.patch, HIVE-12237.patch
>
>






[jira] [Updated] (HIVE-13039) BETWEEN predicate is not functioning correctly with predicate pushdown on Parquet table

2016-02-10 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13039:

Description: 
BETWEEN becomes exclusive in a Parquet table when predicate pushdown is on (as 
it is by default in newer Hive versions). To reproduce (in a cluster, not a 
local setup):
CREATE TABLE parquet_tbl(
  key int,
  ldate string)
 PARTITIONED BY (
 lyear string )
 ROW FORMAT SERDE
 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
 STORED AS INPUTFORMAT
 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
 OUTPUTFORMAT
 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';

insert overwrite table parquet_tbl partition (lyear='2016') select
  1,
  '2016-02-03' from src limit 1;

set hive.optimize.ppd.storage = true;
set hive.optimize.ppd = true;
select * from parquet_tbl where ldate between '2016-02-03' and '2016-02-03';

No row will be returned in a cluster.
But if you turn off hive.optimize.ppd, one row will be returned.





> BETWEEN predicate is not functioning correctly with predicate pushdown on 
> Parquet table
> ---
>
> Key: HIVE-13039
> URL: https://issues.apache.org/jira/browse/HIVE-13039
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>
> BETWEEN becomes exclusive in parquet table when predicate pushdown is on (as 
> it is by default in newer Hive versions). To reproduce(in a cluster, not 
> local setup):
> CREATE TABLE parquet_tbl(
>   key int,
>   ldate string)
>  PARTITIONED BY (
>  lyear string )
>  ROW FORMAT SERDE
>  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
>  STORED AS INPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
>  OUTPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert overwrite table parquet_tbl partition (lyear='2016') select
>   1,
>   '2016-02-03' from src limit 1;
> set hive.optimize.ppd.storage = true;
> set hive.optimize.ppd = true;
> select * from parquet_tbl where ldate between '2016-02-03' and '2016-02-03';
> No row will be returned in a cluster.
> But if you turn off hive.optimize.ppd, one row will be returned.
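The boundary-row behavior described above can be sanity-checked with a minimal Python sketch. This is an illustrative model of the SQL semantics only, not Hive's actual Parquet filter-pushdown code: SQL BETWEEN is inclusive on both bounds, so any rewrite into strict comparisons silently drops rows equal to a bound.

```python
# Model of the repro: one row whose ldate equals both BETWEEN bounds.
rows = [{"key": 1, "ldate": "2016-02-03"}]

def between_inclusive(r, lo, hi):
    # Correct SQL semantics: lo <= x <= hi.
    return lo <= r["ldate"] <= hi

def between_exclusive(r, lo, hi):
    # The buggy behavior described above: bounds treated as strict.
    return lo < r["ldate"] < hi

lo = hi = "2016-02-03"
assert [r for r in rows if between_inclusive(r, lo, hi)] == rows  # one row
assert [r for r in rows if between_exclusive(r, lo, hi)] == []    # no rows
```

With the bounds equal to the stored value, the inclusive predicate keeps the row and the exclusive one discards it, matching the observed difference between hive.optimize.ppd on and off.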





[jira] [Updated] (HIVE-13033) SPDO unnecessarily duplicates columns in key & value of mapper output

2016-02-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13033:

Attachment: HIVE-13033.1.patch

> SPDO unnecessarily duplicates columns in key & value of mapper output
> -
>
> Key: HIVE-13033
> URL: https://issues.apache.org/jira/browse/HIVE-13033
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13033.1.patch
>
>






[jira] [Updated] (HIVE-13039) BETWEEN predicate is not functioning correctly with predicate pushdown on Parquet table

2016-02-10 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13039:

Attachment: HIVE-13039.1.patch

> BETWEEN predicate is not functioning correctly with predicate pushdown on 
> Parquet table
> ---
>
> Key: HIVE-13039
> URL: https://issues.apache.org/jira/browse/HIVE-13039
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-13039.1.patch
>
>
> BETWEEN becomes exclusive in parquet table when predicate pushdown is on (as 
> it is by default in newer Hive versions). To reproduce(in a cluster, not 
> local setup):
> CREATE TABLE parquet_tbl(
>   key int,
>   ldate string)
>  PARTITIONED BY (
>  lyear string )
>  ROW FORMAT SERDE
>  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
>  STORED AS INPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
>  OUTPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert overwrite table parquet_tbl partition (lyear='2016') select
>   1,
>   '2016-02-03' from src limit 1;
> set hive.optimize.ppd.storage = true;
> set hive.optimize.ppd = true;
> select * from parquet_tbl where ldate between '2016-02-03' and '2016-02-03';
> No row will be returned in a cluster.
> But if you turn off hive.optimize.ppd, one row will be returned.





[jira] [Updated] (HIVE-13036) Split hive.root.logger separately to make it compatible with log4j1.x (for remaining services)

2016-02-10 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13036:
-
Attachment: HIVE-13036.1.patch

> Split hive.root.logger separately to make it compatible with log4j1.x (for 
> remaining services)
> --
>
> Key: HIVE-13036
> URL: https://issues.apache.org/jira/browse/HIVE-13036
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13036.1.patch
>
>
> Similar to HIVE-12402 but for HS2 and metastore this time.





[jira] [Updated] (HIVE-13038) LLAP needs service class registration for token identifier

2016-02-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13038:

Attachment: HIVE-13038.patch

[~sseth] [~gopalv] can you take a look? 

> LLAP needs service class registration for token identifier
> --
>
> Key: HIVE-13038
> URL: https://issues.apache.org/jira/browse/HIVE-13038
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13038.patch
>
>
> I saw some warnings in a failed security test. Whether they have any effect 
> is an open question, since the test failure is not systematic (it passes or 
> fails depending on some seemingly unrelated configuration), but I'm going to 
> fix it anyway.





[jira] [Commented] (HIVE-12988) Improve dynamic partition loading IV

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141784#comment-15141784
 ] 

Hive QA commented on HIVE-12988:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787090/HIVE-12988.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9753 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_load_data_to_encrypted_tables
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6935/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6935/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6935/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787090 - PreCommit-HIVE-TRUNK-Build

> Improve dynamic partition loading IV
> 
>
> Key: HIVE-12988
> URL: https://issues.apache.org/jira/browse/HIVE-12988
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12988.2.patch, HIVE-12988.2.patch, 
> HIVE-12988.3.patch, HIVE-12988.patch
>
>
> Parallelize copyFiles()





[jira] [Commented] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK

2016-02-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141818#comment-15141818
 ] 

Thejas M Nair commented on HIVE-13020:
--

+1

Thanks for the patch [~gss2002]!


> Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
> --
>
> Key: HIVE-13020
> URL: https://issues.apache.org/jira/browse/HIVE-13020
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Metastore, Shims
>Affects Versions: 1.2.0, 1.3.0, 1.2.1
> Environment: Linux X86_64 and IBM JDK 8
>Reporter: Greg Senia
>Assignee: Greg Senia
>  Labels: hdp, ibm, ibm-jdk
> Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, 
> hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, 
> hiveserver2_beforepatch.txt
>
>
> The HiveServer2 and Hive Metastore ZooKeeper components are hardcoded to 
> support only the Oracle/OpenJDK. I was testing Hadoop running on the IBM JDK, 
> discovered this issue, and have since drawn up the attached patch. It resolves 
> the issue in a similar manner to how Hadoop core handles the IBM JDK.





[jira] [Commented] (HIVE-13017) Child process of HiveServer2 fails to get delegation token from non default FileSystem

2016-02-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141854#comment-15141854
 ] 

Thejas M Nair commented on HIVE-13017:
--

+1 

Can you also please run the new lines through a formatter? There are some nits, 
like a missing space after 'for' and after ','.


> Child process of HiveServer2 fails to get delegation token from non default 
> FileSystem
> --
>
> Key: HIVE-13017
> URL: https://issues.apache.org/jira/browse/HIVE-13017
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 1.2.1
> Environment: Secure 
>Reporter: Takahiko Saito
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13017.2.patch, HIVE-13017.patch
>
>
> The following query fails when Azure Filesystem is used as the default file 
> system and HDFS is used for intermediate data.
> {noformat}
> >>>  create temporary table s10k stored as orc as select * from studenttab10k;
> >>>  create temporary table v10k as select * from votertab10k;
> >>>  select registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;
> ERROR : Execution failed with exit status: 2
> ERROR : Obtaining error information
> ERROR : 
> Task failed!
> Task ID:
>   Stage-5
> Logs:
> ERROR : /var/log/hive/hiveServer2.log
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=2)
> Aborting command set because "force" is false and command failed: "select 
> registration 
> from s10k s join v10k v 
> on (s.name = v.name) join studentparttab30k p 
> on (p.name = v.name) 
> where s.age < 25 and v.age < 25 and p.age < 25;"
> Closing: 0: 
> jdbc:hive2://zk2-hs21-h.hdinsight.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_h...@hdinsight.net;transportMode=http;httpPath=cliservice
> hiveServer2.log shows:
> 2016-02-02 18:04:34,182 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG 
> method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,199 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG 
> method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,212 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(127)) - Could not 
> validate cookie sent, will try to generate a new cookie
> 2016-02-02 18:04:34,213 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:checkConcurrency(168)) - Concurrency mode is disabled, 
> not creating a lock manager
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(352)) - 
> Failed to authenticate with http/_HOST kerberos principal, trying with 
> hive/_HOST kerberos principal
> 2016-02-02 18:04:34,219 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG 
> method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,225 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1390)) - Setting caller context to query id 
> hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0
> 2016-02-02 18:04:34,226 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> ql.Driver (Driver.java:execute(1393)) - Starting 
> command(queryId=hive_20160202180429_76ab-64d6-4c89-88b0-6355cc5acbd0): 
> select registration
> from s10k s join v10k v
> on (s.name = v.name) join studentparttab30k p
> on (p.name = v.name)
> where s.age < 25 and v.age < 25 and p.age < 25
> 2016-02-02 18:04:34,228 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
> 2016-02-02 18:04:34,229 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG 
> method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook 
> from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,237 INFO  [HiveServer2-HttpHandler-Pool: Thread-55]: 
> thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(169)) - Cookie added 
> for clientUserName hrt_qa
> 2016-02-02 18:04:34,238 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG 
> method=PreHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1454436274229 
> end=1454436274238 duration=9 from=org.apache.hadoop.hive.ql.Driver>
> 2016-02-02 18:04:34,239 INFO  [HiveServer2-Background-Pool: Thread-517]: 
> log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) -  

[jira] [Updated] (HIVE-11355) Hive on tez: memory manager for sort buffers (input/output) and operators

2016-02-10 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-11355:
--
Attachment: HIVE-11355.15.patch

Fix a bug in the code.

> Hive on tez: memory manager for sort buffers (input/output) and operators
> -
>
> Key: HIVE-11355
> URL: https://issues.apache.org/jira/browse/HIVE-11355
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Affects Versions: 2.0.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-11355.1.patch, HIVE-11355.10.patch, 
> HIVE-11355.11.patch, HIVE-11355.12.patch, HIVE-11355.13.patch, 
> HIVE-11355.14.patch, HIVE-11355.15.patch, HIVE-11355.2.patch, 
> HIVE-11355.3.patch, HIVE-11355.4.patch, HIVE-11355.5.patch, 
> HIVE-11355.6.patch, HIVE-11355.7.patch, HIVE-11355.8.patch, HIVE-11355.9.patch
>
>
> We need to better manage the sort buffer allocations to ensure better 
> performance. Also, we need to provide configurations to certain operators to 
> stay within memory limits.





[jira] [Updated] (HIVE-12967) Change LlapServiceDriver to read a properties file instead of llap-daemon-site

2016-02-10 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-12967:
---
Attachment: HIVE-12967.2.patch

[~sseth]: modified the patch and removed the properties file entirely.

There are no llap-daemon-site configs in any file: everything is either 
configured in hive-site.xml (for the configs that make sense to share between 
HS2/AM/LLAP) or read by the AM via ZK.

hive-site.xml is included in the LlapDaemonConfiguration since the LLAP daemons 
now talk directly to the Metastore.

> Change LlapServiceDriver to read a properties file instead of llap-daemon-site
> --
>
> Key: HIVE-12967
> URL: https://issues.apache.org/jira/browse/HIVE-12967
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-12967.01.patch, HIVE-12967.1.wip.txt, 
> HIVE-12967.2.patch
>
>
> Having a copy of llap-daemon-site on the client node can be quite confusing, 
> since LlapServiceDriver generates the actual llap-daemon-site used by the 
> daemons. Instead, base settings can be picked up from a properties file.
> Also add java_home as a parameter to the script.





[jira] [Commented] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1514#comment-1514
 ] 

Hive QA commented on HIVE-12994:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787117/HIVE-12994.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6938/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6938/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6938/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6938/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   2663f49..d7efa49  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 2663f49 HIVE-12987: Add metrics for HS2 active users and SQL 
operations(Jimmy, reviewed by Szehon, Aihua)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
+ git reset --hard origin/master
HEAD is now at d7efa49 HIVE-13038 : LLAP needs service class registration for 
token identifier (Sergey Shelukhin, reviewed by Prasanth Jayachandran)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787117 - PreCommit-HIVE-TRUNK-Build

> Implement support for NULLS FIRST/NULLS LAST
> 
>
> Key: HIVE-12994
> URL: https://issues.apache.org/jira/browse/HIVE-12994
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Metastore, Parser, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12994.01.patch, HIVE-12994.02.patch, 
> HIVE-12994.03.patch, HIVE-12994.patch
>
>
> From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to 
> determine whether nulls appear before or after non-null data values when the 
> ORDER BY clause is used.
> The SQL standard does not specify a default. Currently in Hive, null values 
> sort as if lower than any non-null value; that is, NULLS FIRST is the default 
> for ASC order, and NULLS LAST for DESC order.
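The current defaults described above can be sketched in a few lines of Python. This is an illustrative model of the ordering semantics (treating NULL as lower than every value), not Hive's sorting code:

```python
# Model NULL as "lower than any value" via a tuple sort key:
# (False, None) sorts before (True, v) for every non-null v, and the
# second element is never compared across null/non-null pairs.
def asc_nulls_first(xs):
    return sorted(xs, key=lambda v: (v is not None, v))

def desc_nulls_last(xs):
    return sorted(xs, key=lambda v: (v is not None, v), reverse=True)

vals = [3, None, 1, 2]
assert asc_nulls_first(vals) == [None, 1, 2, 3]   # NULLS FIRST for ASC
assert desc_nulls_last(vals) == [3, 2, 1, None]   # NULLS LAST for DESC
```

Implementing NULLS FIRST/NULLS LAST amounts to letting the query choose that tuple's first component independently of the sort direction.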





[jira] [Commented] (HIVE-9534) incorrect result set for query that projects a windowed aggregate

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142326#comment-15142326
 ] 

Hive QA commented on HIVE-9534:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787127/HIVE-9534.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9738 tests executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-auto_sortmerge_join_13.q-tez_self_join.q-schema_evol_text_nonvec_mapwork_table.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-dynpart_sort_optimization2.q-tez_bmj_schema_evolution.q-vector_char_mapjoin1.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6939/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6939/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6939/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787127 - PreCommit-HIVE-TRUNK-Build

> incorrect result set for query that projects a windowed aggregate
> -
>
> Key: HIVE-9534
> URL: https://issues.apache.org/jira/browse/HIVE-9534
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: N Campbell
>Assignee: Aihua Xu
> Attachments: HIVE-9534.1.patch, HIVE-9534.2.patch, HIVE-9534.3.patch, 
> HIVE-9534.4.patch
>
>
> Result set returned by Hive has one row instead of 5
> {code}
> select avg(distinct tsint.csint) over () from tsint 
> create table  if not exists TSINT (RNUM int , CSINT smallint)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}
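The expected behavior can be modeled directly. This is a hedged sketch of standard windowing semantics (not Hive's PTF implementation): an aggregate with an empty OVER () clause is computed once over all rows and then emitted for every input row, so the query above should return five rows, not one.

```python
# The tsint.csint column from the repro data, with \N as Python None.
csint = [None, -1, 0, 1, 10]

# avg(distinct ...) ignores NULLs and deduplicates before averaging.
distinct_non_null = {v for v in csint if v is not None}
avg_distinct = sum(distinct_non_null) / len(distinct_non_null)

# OVER () replicates the single aggregate value to every input row.
result = [avg_distinct for _ in csint]
assert len(result) == 5      # five rows expected, not one
assert result[0] == 2.5      # (-1 + 0 + 1 + 10) / 4
```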





[jira] [Commented] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field

2016-02-10 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142223#comment-15142223
 ] 

Yongzhi Chen commented on HIVE-12941:
-

The failures are not related.
The test 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables 
passed in my local machine.
The other 3 failed many times in other pre-commit builds.


> Unexpected result when using MIN() on struct with NULL in first field
> -
>
> Key: HIVE-12941
> URL: https://issues.apache.org/jira/browse/HIVE-12941
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Jan-Erik Hedbom
>Assignee: Yongzhi Chen
> Attachments: HIVE-12941.1.patch, HIVE-12941.2.patch, 
> HIVE-12941.3.patch
>
>
> Using MIN() on a struct with NULL in the first field of a row yields NULL as 
> the result.
> Example:
> select min(a) FROM (select 1 as a union all select 2 as a union all select 
> cast(null as int) as a) tmp;
> OK
> _c0
> 1
> As expected. But if we wrap it in a struct:
> select min(a) FROM (select named_struct("field",1) as a union all select 
> named_struct("field",2) as a union all select named_struct("field",cast(null 
> as int)) as a) tmp;
> OK
> _c0
> NULL
> Using MAX() works as expected for structs.
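A plausible model of the symptom is a running-minimum comparator that treats a NULL leading field as smaller than any value instead of skipping it. This Python sketch is hypothetical (it is not Hive's GenericUDAFMin code); it only reproduces the observed contrast between the bare-column and struct cases:

```python
structs = [{"field": 1}, {"field": 2}, {"field": None}]

def min_bare(values):
    # Aggregates over a bare column ignore NULLs, as in the first query.
    non_null = [v for v in values if v is not None]
    return min(non_null) if non_null else None

def buggy_min_struct(rows):
    best = rows[0]
    for r in rows[1:]:
        # Bug being modeled: None sorts as lower than any value instead of
        # being skipped, so the NULL struct wins the running minimum.
        if r["field"] is None or (
            best["field"] is not None and r["field"] < best["field"]
        ):
            best = r
    return best

assert min_bare([1, 2, None]) == 1                   # matches the first query
assert buggy_min_struct(structs) == {"field": None}  # the unexpected NULL
```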



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records

2016-02-10 Thread Mark Wagner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142238#comment-15142238
 ] 

Mark Wagner commented on HIVE-10187:


Those test failures are unrelated, and all the other precommit builds have been 
failing on the same tests for as long back as I can see.

> Avro backed tables don't handle cyclical or recursive records
> -
>
> Key: HIVE-10187
> URL: https://issues.apache.org/jira/browse/HIVE-10187
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 1.2.0
>Reporter: Mark Wagner
>Assignee: Mark Wagner
> Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, 
> HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.5.patch, 
> HIVE-10187.demo.patch
>
>
> [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for 
> recursive/cyclical schemas. However, any attempt to serialize data which 
> exploits that ability results in silently dropped fields.





[jira] [Updated] (HIVE-13041) Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values

2016-02-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13041:

Attachment: HIVE-13041.1-branch1.patch

> Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values
> -
>
> Key: HIVE-13041
> URL: https://issues.apache.org/jira/browse/HIVE-13041
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13041.1-branch1.patch
>
>
> Backport.





[jira] [Commented] (HIVE-12965) Insert overwrite local directory should perserve the overwritten directory permission

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142218#comment-15142218
 ] 

Hive QA commented on HIVE-12965:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787141/HIVE-12965.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 724 failed/errored test(s), 9753 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_add_part_multiple
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_allcolref_in_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_numbuckets_partitioned_table2_h23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_change_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_update_status
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_cascade
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_partition_drop
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_serde2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_update_status
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_varchar2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_tbl_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_excludeHadoop20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_multi
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_array_map_access_nonconstant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_create_temp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_reordering_values
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_add_column2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_add_column3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_charvarchar
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_date
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_decimal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_decimal_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_fields
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_partitioned_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_evolution_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_timestamp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_output_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_table_bincolserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_table_colserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_case_sensitivity
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cast1

[jira] [Commented] (HIVE-12967) Change LlapServiceDriver to read a properties file instead of llap-daemon-site

2016-02-10 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142311#comment-15142311
 ] 

Siddharth Seth commented on HIVE-12967:
---

I'm good with the updated patch and with having the settings in hive-site.xml for 
now. There's still scope for confusion, since LLAP-specific properties like 
cache size can be overridden by the service driver, in which case the values 
are not indicative of what's actually being used in the daemon. It does make 
overall deployment simpler, as long as the name, ZK address, etc. are not 
specified via "--name" or "--hiveconf". No more pasting configs into hive-site.xml.
+1. It would be good to add a warning note to the --name and --hiveconf options 
about potentially needing to copy parameters into hive-site.xml.

In another jira, an llap-client config could be set up with the appropriate 
information, daemon configs picked up from a template properties file and 
provided via the command line to the script. llap-client would contain the name 
and ZK address - for discovery - and any configs required in the AM to 
communicate with LLAP. It would be nice if we could ship a metastore-client 
config file to LLAP instead of the entire hive-site.
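As a rough illustration of the template-properties idea described above, such a file might look like the following. The key names are standard Hive LLAP configuration keys, but the file format and the values shown are assumptions for illustration only:

{noformat}
# Illustrative LLAP daemon base settings (values are placeholders)
hive.llap.daemon.num.executors=4
hive.llap.daemon.memory.per.instance.mb=4096
hive.llap.io.memory.size=2048m
hive.llap.daemon.service.hosts=@llap0
{noformat}

The LlapServiceDriver would read these as base settings while still allowing overrides such as cache size on its command line.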

> Change LlapServiceDriver to read a properties file instead of llap-daemon-site
> --
>
> Key: HIVE-12967
> URL: https://issues.apache.org/jira/browse/HIVE-12967
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-12967.01.patch, HIVE-12967.1.wip.txt, 
> HIVE-12967.2.patch
>
>
> Having a copy of llap-daemon-site on the client node can be quite confusing, 
> since LlapServiceDriver generates the actual llap-daemon-site used by the daemons.
> Instead, base settings can be picked up from a properties file.
> Also add java_home as a parameter to the script.





[jira] [Updated] (HIVE-13041) Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values

2016-02-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13041:

Attachment: HIVE-13041.2-branch1.patch

> Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values
> -
>
> Key: HIVE-13041
> URL: https://issues.apache.org/jira/browse/HIVE-13041
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13041.1-branch1.patch, HIVE-13041.2-branch1.patch
>
>
> Backport.





[jira] [Updated] (HIVE-13004) Remove encryption shims

2016-02-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13004:

Attachment: HIVE-13004.patch

> Remove encryption shims
> ---
>
> Key: HIVE-13004
> URL: https://issues.apache.org/jira/browse/HIVE-13004
> Project: Hive
>  Issue Type: Task
>  Components: Encryption
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13004.patch
>
>
> It has served its purpose. Now that we don't support hadoop-1, it's no longer 
> needed.





[jira] [Commented] (HIVE-13041) Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values

2016-02-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142354#comment-15142354
 ] 

Matt McCline commented on HIVE-13041:
-

Build error does not make sense.

> Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values
> -
>
> Key: HIVE-13041
> URL: https://issues.apache.org/jira/browse/HIVE-13041
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13041.1-branch1.patch
>
>
> Backport.





[jira] [Commented] (HIVE-13041) Backport to branch-1 HIVE-9862 Vectorized execution corrupts timestamp values

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142331#comment-15142331
 ] 

Hive QA commented on HIVE-13041:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787420/HIVE-13041.1-branch1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BRANCH_1-Build/23/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BRANCH_1-Build/23/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-BRANCH_1-Build-23/

Messages:
{noformat}
 This message was trimmed, see log for full details 
Generating class name FuncRoundDecimalToDecimal
Generating class name FuncBRoundDecimalToDecimal
Generating class name FuncNegateDecimalToDecimal
Generating class name CastDoubleToLong
Generating class name CastLongToDouble
Generating class name CastDoubleToBooleanViaDoubleToLong
Generating class name CastLongToBooleanViaLongToLong
Generating class name CastDateToBooleanViaLongToLong
Generating class name LongColUnaryMinus
Generating class name DoubleColUnaryMinus
Generating class name IfExprLongColumnLongColumn
Generating class name IfExprDoubleColumnDoubleColumn
Generating class name IfExprLongColumnLongScalar
Generating class name IfExprDoubleColumnLongScalar
Generating class name IfExprLongColumnDoubleScalar
Generating class name IfExprDoubleColumnDoubleScalar
Generating class name IfExprLongScalarLongColumn
Generating class name IfExprDoubleScalarLongColumn
Generating class name IfExprLongScalarDoubleColumn
Generating class name IfExprDoubleScalarDoubleColumn
Generating class name IfExprLongScalarLongScalar
Generating class name IfExprDoubleScalarLongScalar
Generating class name IfExprLongScalarDoubleScalar
Generating class name IfExprDoubleScalarDoubleScalar
Generating class name VectorUDAFMinLong
Generating class name VectorUDAFMinDouble
Generating class name VectorUDAFMaxLong
Generating class name VectorUDAFMaxDouble
Generating class name VectorUDAFMaxDecimal
Generating class name VectorUDAFMinDecimal
Generating class name VectorUDAFMinString
Generating class name VectorUDAFMaxString
Generating class name VectorUDAFMaxTimestamp
Generating class name VectorUDAFMinTimestamp
Generating class name VectorUDAFSumLong
Generating class name VectorUDAFSumDouble
Generating class name VectorUDAFAvgLong
Generating class name VectorUDAFAvgDouble
Generating class name VectorUDAFVarPopLong
Generating class name VectorUDAFVarPopDouble
Generating class name VectorUDAFVarPopDecimal
Generating class name VectorUDAFVarSampLong
Generating class name VectorUDAFVarSampDouble
Generating class name VectorUDAFVarSampDecimal
Generating class name VectorUDAFStdPopLong
Generating class name VectorUDAFStdPopDouble
Generating class name VectorUDAFStdPopDecimal
Generating class name VectorUDAFStdSampLong
Generating class name VectorUDAFStdSampDouble
Generating class name VectorUDAFStdSampDecimal
Generating vector expression test code
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-branch1-source/ql/src/gen/protobuf/gen-java
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-branch1-source/ql/src/gen/thrift/gen-javabean
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-branch1-source/ql/target/generated-sources/java
 added.
[INFO] 
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory 
/data/hive-ptest/working/apache-github-branch1-source/ql/src/java
ANTLR Parser Generator  Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
warning(200): IdentifiersParser.g:455:5: 
Decision can match input such as "{KW_REGEXP, KW_RLIKE} KW_UNION KW_FROM" using 
multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:455:5: 
Decision can match input such as "{KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE KW_BY" 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:455:5: 
Decision can match input such as "{KW_REGEXP, KW_RLIKE} KW_INSERT KW_INTO" 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:455:5: 
Decision can match input such as "{KW_REGEXP, KW_RLIKE} KW_LATERAL KW_VIEW" 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:455:5: 
Decision can match input such as "{KW_REGEXP, KW_RLIKE} KW_SORT KW_BY" using 
multiple alternatives: 2, 9

As a result, 

[jira] [Commented] (HIVE-12592) Expose connection pool tuning props in TxnHandler

2016-02-10 Thread Chetna Chaudhari (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140559#comment-15140559
 ] 

Chetna Chaudhari commented on HIVE-12592:
-

[~ekoifman]: The BoneCPConfig() constructor loads the file from the config path if 
you provide a file named "bonecp-config.xml" or "bonecp-default-config.xml". 
So I think we don't need to modify anything in the TxnHandler code.
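For illustration, such a file might look roughly like the following. The property names mirror the BoneCPConfig setters (connectionTimeoutInMs, partitionCount, maxConnectionsPerPartition, etc.), but the exact XML schema and element layout should be verified against the BoneCP documentation:

{noformat}
<?xml version="1.0" encoding="UTF-8"?>
<bonecp-config>
  <default-config>
    <!-- Illustrative pool-tuning values only -->
    <property name="connectionTimeoutInMs">10000</property>
    <property name="partitionCount">3</property>
    <property name="maxConnectionsPerPartition">10</property>
    <property name="minConnectionsPerPartition">2</property>
  </default-config>
</bonecp-config>
{noformat}

Placing a file like this where BoneCP looks for "bonecp-config.xml" would then tune the pool without any code changes.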

> Expose connection pool tuning props in TxnHandler
> -
>
> Key: HIVE-12592
> URL: https://issues.apache.org/jira/browse/HIVE-12592
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Chetna Chaudhari
>
> BoneCP allows various pool tuning options like connection timeout, number of 
> connections, etc.
> There should be a config-based way to set these.





[jira] [Commented] (HIVE-12064) prevent transactional=false

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140574#comment-15140574
 ] 

Hive QA commented on HIVE-12064:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787011/HIVE-12064.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 9773 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_delete_not_bucketed
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_change_fileformat_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_change_serde_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_reorder_columns1_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_reorder_columns2_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_replace_columns1_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_replace_columns2_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_replace_columns3_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_type_promotion1_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_type_promotion2_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_orc_type_promotion3_acid
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_update_not_bucketed
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testPartitionFilter
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testPartitionFilter
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testTransactionalValidation
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testPartitionFilter
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testTransactionalValidation
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testPartitionFilter
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testTransactionalValidation
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyServer.testPartitionFilter
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyServer.testTransactionalValidation
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTbl
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testMulti
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchAbort
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchCommitPartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchCommitUnpartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyAbortPartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyAbortUnartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyCommitPartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchEmptyCommitUnpartitioned
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testUpdatesAndDeletes
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6930/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6930/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6930/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787011 - PreCommit-HIVE-TRUNK-Build

> prevent transactional=false
> ---
>
> Key: HIVE-12064
> URL: https://issues.apache.org/jira/browse/HIVE-12064
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Critical
> Attachments: HIVE-12064.2.patch, HIVE-12064.3.patch, HIVE-12064.patch
>
>
> Currently, a tblproperty transactional=true must be set to make a table behave 
> in an ACID-compliant way.
> This is misleading in that it seems like changing it to transactional=false 
> makes the table non-ACID, but the on-disk layout of an ACID table is different 
> from that of plain tables.  So changing this  

[jira] [Commented] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140578#comment-15140578
 ] 

Hive QA commented on HIVE-12730:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787022/HIVE-12730.07.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6931/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6931/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6931/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2663f49 HIVE-12987: Add metrics for HS2 active users and SQL 
operations(Jimmy, reviewed by Szehon, Aihua)
+ git clean -f -d
Removing 
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java.orig
Removing 
metastore/src/java/org/apache/hadoop/hive/metastore/TransactionalValidationListener.java
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 2663f49 HIVE-12987: Add metrics for HS2 active users and SQL 
operations(Jimmy, reviewed by Szehon, Aihua)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787022 - PreCommit-HIVE-TRUNK-Build

> MetadataUpdater: provide a mechanism to edit the basic statistics of a table 
> (or a partition)
> -
>
> Key: HIVE-12730
> URL: https://issues.apache.org/jira/browse/HIVE-12730
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12730.01.patch, HIVE-12730.02.patch, 
> HIVE-12730.03.patch, HIVE-12730.04.patch, HIVE-12730.05.patch, 
> HIVE-12730.06.patch, HIVE-12730.07.patch
>
>
> We would like to provide a way for developers/users to modify the numRows and 
> dataSize for a table/partition. Right now, although they are part of the table 
> properties, they will be set to -1 when the task is not coming from a 
> statsTask. 





[jira] [Commented] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140475#comment-15140475
 ] 

Hive QA commented on HIVE-10187:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786988/HIVE-10187.5.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9768 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6929/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6929/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6929/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786988 - PreCommit-HIVE-TRUNK-Build

> Avro backed tables don't handle cyclical or recursive records
> -
>
> Key: HIVE-10187
> URL: https://issues.apache.org/jira/browse/HIVE-10187
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 1.2.0
>Reporter: Mark Wagner
>Assignee: Mark Wagner
> Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, 
> HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.5.patch, 
> HIVE-10187.demo.patch
>
>
> [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for 
> recursive/cyclical schemas. However, any attempt to serialize data which 
> exploits that ability results in silently dropped fields.





[jira] [Resolved] (HIVE-12973) Create Debian in Hive packaging module

2016-02-10 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu resolved HIVE-12973.

Resolution: Duplicate

> Create Debian in Hive packaging module
> --
>
> Key: HIVE-12973
> URL: https://issues.apache.org/jira/browse/HIVE-12973
> Project: Hive
>  Issue Type: New Feature
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
>






[jira] [Commented] (HIVE-11749) Deadlock of fetching InputFormat table when multiple root stage

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140718#comment-15140718
 ] 

Hive QA commented on HIVE-11749:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787053/HIVE-11749.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9753 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarDataNucleusUnCaching
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6932/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6932/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787053 - PreCommit-HIVE-TRUNK-Build

> Deadlock of fetching InputFormat table when multiple root stage
> ---
>
> Key: HIVE-11749
> URL: https://issues.apache.org/jira/browse/HIVE-11749
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Ryu Kobayashi
>Assignee: Kai Sasaki
> Attachments: HIVE-11749.00.patch, HIVE-11749.01.patch, 
> HIVE-11749.stack-tarace.txt
>
>
> It does not always happen, but the query can deadlock when run. The environment is as follows:
> * Hadoop 2.6.0
> * Hive 0.13
> * JDK 1.7.0_79
> The stack trace is attached.





[jira] [Updated] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)

2016-02-10 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12730:
---
Attachment: HIVE-12730.08.patch

> MetadataUpdater: provide a mechanism to edit the basic statistics of a table 
> (or a partition)
> -
>
> Key: HIVE-12730
> URL: https://issues.apache.org/jira/browse/HIVE-12730
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12730.01.patch, HIVE-12730.02.patch, 
> HIVE-12730.03.patch, HIVE-12730.04.patch, HIVE-12730.05.patch, 
> HIVE-12730.06.patch, HIVE-12730.07.patch, HIVE-12730.08.patch
>
>
> We would like to provide a way for developers/users to modify the numRows and 
> dataSize for a table/partition. Right now, although they are part of the table 
> properties, they will be set to -1 when the task is not coming from a 
> statsTask. 





[jira] [Commented] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141921#comment-15141921
 ] 

Xuefu Zhang commented on HIVE-11866:


+1 latest patch looks good to me.

> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, 
> HIVE-11866.4.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.





[jira] [Commented] (HIVE-11866) Add framework to enable testing using LDAPServer using LDAP protocol

2016-02-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15141971#comment-15141971
 ] 

Thejas M Nair commented on HIVE-11866:
--

Looks great from the library use perspective! Thanks for following up on this!


> Add framework to enable testing using LDAPServer using LDAP protocol
> 
>
> Key: HIVE-11866
> URL: https://issues.apache.org/jira/browse/HIVE-11866
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-11866.2.patch, HIVE-11866.3.patch, 
> HIVE-11866.4.patch, HIVE-11866.patch
>
>
> Currently there is no unit test coverage for HS2's LDAP Atn provider using an 
> LDAP server on the backend. This prevents testing of the LDAPAtnProvider with 
> some realistic use cases.





[jira] [Updated] (HIVE-13040) Handle empty bucket creations more efficiently

2016-02-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13040:

Attachment: HIVE-13040.patch

> Handle empty bucket creations more efficiently 
> ---
>
> Key: HIVE-13040
> URL: https://issues.apache.org/jira/browse/HIVE-13040
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.0.0, 1.2.0, 1.1.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13040.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12049) Provide an option to write serialized thrift objects in final tasks

2016-02-10 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-12049:

Attachment: HIVE-12049.4.patch

So I'm uploading an end-to-end patch here, which will need some testing and 
improvement.

> Provide an option to write serialized thrift objects in final tasks
> ---
>
> Key: HIVE-12049
> URL: https://issues.apache.org/jira/browse/HIVE-12049
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Rohit Dholakia
>Assignee: Rohit Dholakia
> Attachments: HIVE-12049.1.patch, HIVE-12049.2.patch, 
> HIVE-12049.3.patch, HIVE-12049.4.patch
>
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing 
> the row objects and translating them into a different representation suitable 
> for the RPC transfer. In moderate to high concurrency scenarios, this can 
> result in significant CPU and memory wastage. By having each task write the 
> appropriate thrift objects to the output files, HiveServer2 can simply stream 
> a batch of rows on the wire without incurring any of the additional cost of 
> deserialization and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator 
> can use to write thrift formatted row batches to the output file. Using the 
> pluggable property of the {{hive.query.result.fileformat}}, we can set it to 
> use SequenceFile and write a batch of thrift formatted rows as a value blob. 
> The FetchTask can now simply read the blob and send it over the wire. On the 
> client side, the JDBC/ODBC driver can read the blob and, since it is already 
> formatted in the way it expects, it can continue building the ResultSet the 
> way it does in the current implementation.
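The write-once/stream-bytes idea described above can be sketched in a few lines: each row batch is encoded exactly once at write time and stored as a length-prefixed blob, and the fetch path forwards the stored blobs without ever decoding individual rows. This is a minimal illustration only; `json` stands in for the thrift encoding, and the helper names are invented for the sketch.

```python
import io
import json

def write_batches(fh, batches):
    # "FileSinkOperator" side: encode each row batch exactly once and store
    # it as a length-prefixed blob (json stands in for thrift here).
    for batch in batches:
        blob = json.dumps(batch).encode("utf-8")
        fh.write(len(blob).to_bytes(4, "big"))
        fh.write(blob)

def stream_batches(fh):
    # "FetchTask" side: forward each stored blob as-is, never touching
    # the individual rows inside it.
    while True:
        header = fh.read(4)
        if not header:
            return
        yield fh.read(int.from_bytes(header, "big"))

buf = io.BytesIO()
write_batches(buf, [[[1, "a"], [2, "b"]], [[3, "c"]]])
buf.seek(0)
blobs = list(stream_batches(buf))  # two opaque blobs, no per-row decode
```

The per-fetch saving comes from the second function: it only moves bytes, so the deserialize/re-serialize cost is paid once at write time instead of on every fetch.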



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13038) LLAP needs service class registration for token identifier

2016-02-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142001#comment-15142001
 ] 

Prasanth Jayachandran commented on HIVE-13038:
--

+1

> LLAP needs service class registration for token identifier
> --
>
> Key: HIVE-13038
> URL: https://issues.apache.org/jira/browse/HIVE-13038
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13038.patch
>
>
> I saw some warnings in a failed security test. Whether they have any effect 
> is an open question, since the test failure is not systematic (it passes or 
> fails depending on some seemingly unrelated configuration), but I'm going to fix 
> it anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field

2016-02-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142088#comment-15142088
 ] 

Hive QA commented on HIVE-12941:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12787108/HIVE-12941.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9754 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarDataNucleusUnCaching
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6936/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6936/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6936/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12787108 - PreCommit-HIVE-TRUNK-Build

> Unexpected result when using MIN() on struct with NULL in first field
> -
>
> Key: HIVE-12941
> URL: https://issues.apache.org/jira/browse/HIVE-12941
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Jan-Erik Hedbom
>Assignee: Yongzhi Chen
> Attachments: HIVE-12941.1.patch, HIVE-12941.2.patch, 
> HIVE-12941.3.patch
>
>
> Using MIN() on a struct with NULL in the first field of a row yields NULL as the result.
> Example:
> select min(a) FROM (select 1 as a union all select 2 as a union all select 
> cast(null as int) as a) tmp;
> OK
> _c0
> 1
> As expected. But if we wrap it in a struct:
> select min(a) FROM (select named_struct("field",1) as a union all select 
> named_struct("field",2) as a union all select named_struct("field",cast(null 
> as int)) as a) tmp;
> OK
> _c0
> NULL
> Using MAX() works as expected for structs.
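The reported behaviour is consistent with a comparator that sorts NULL below every non-NULL value, so the struct whose first field is NULL becomes the minimum and MIN() surfaces it. A small sketch of that ordering (the comparator below is an assumption for illustration, not Hive's actual code):

```python
import functools

def struct_compare(a, b):
    # Assumed NULL-lowest ordering: None sorts below every non-None value.
    for x, y in zip(a, b):
        if x == y:
            continue
        if x is None:
            return -1
        if y is None:
            return 1
        return -1 if x < y else 1
    return 0

# The three structs from the example query: {1}, {2}, {NULL}.
structs = [(1,), (2,), (None,)]
smallest = min(structs, key=functools.cmp_to_key(struct_compare))
# The struct with the NULL field wins, which is why MIN() returns NULL.
```

Under this ordering MAX() would still pick `(2,)`, matching the observation that MAX() works as expected.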



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11097) HiveInputFormat uses String.startsWith to compare splitPath and PathToAliases

2016-02-10 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142094#comment-15142094
 ] 

Shannon Ladymon commented on HIVE-11097:


[~prasanth_j], thank you for the details on how to add a test case.  I've added 
this to the wiki.  Please let me know if there are any changes/clarifications 
needed to what I put in the wiki:
* [Hive Developer FAQ - How do I add a test case? | 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ#HiveDeveloperFAQ-HowdoIaddatestcase?]

> HiveInputFormat uses String.startsWith to compare splitPath and PathToAliases
> -
>
> Key: HIVE-11097
> URL: https://issues.apache.org/jira/browse/HIVE-11097
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.13.0, 0.14.0, 0.13.1, 1.0.0, 1.2.0
> Environment: Hive 0.13.1, Hive 2.0.0, hadoop 2.4.1
>Reporter: Wan Chang
>Assignee: Wan Chang
>Priority: Critical
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-11097.1.patch, HIVE-11097.2.patch, 
> HIVE-11097.3.patch, HIVE-11097.4.patch, HIVE-11097.5.patch
>
>
> Say we have a SQL query such as
> {code}
> create table if not exists test_orc_src (a int, b int, c int) stored as orc;
> create table if not exists test_orc_src2 (a int, b int, d int) stored as orc;
> insert overwrite table test_orc_src select 1,2,3 from src limit 1;
> insert overwrite table test_orc_src2 select 1,2,4 from src limit 1;
> set hive.auto.convert.join = false;
> set hive.execution.engine=mr;
> select
>   tb.c
> from test.test_orc_src tb
> join (select * from test.test_orc_src2) tm
> on tb.a = tm.a
> where tb.b = 2
> {code}
> The correct result is 3, but the query produced no result.
> I find that in HiveInputFormat.pushProjectionsAndFilters
> {code}
> match = splitPath.startsWith(key) || splitPathWithNoSchema.startsWith(key);
> {code}
> It uses startsWith to match aliases against the split path, so the split for 
> tm will match two aliases in this case.
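The prefix collision is easy to reproduce outside Hive: in the example above, `test_orc_src` is a string prefix of `test_orc_src2`, so a plain `startsWith` check matches a split under `test_orc_src2` against both table paths. The paths and the boundary-aware alternative below are illustrative only, not the actual HIVE-11097 fix.

```python
split_path = "/warehouse/test.db/test_orc_src2/000000_0"
path_to_aliases = {
    "/warehouse/test.db/test_orc_src": ["tb"],
    "/warehouse/test.db/test_orc_src2": ["tm"],
}

# The buggy check: a plain string prefix test.
matched = [alias
           for key, aliases in path_to_aliases.items()
           if split_path.startswith(key)
           for alias in aliases]
# Both aliases match, although only "tm" should.

# A safer check treats the key as a whole path-component boundary.
fixed = [alias
         for key, aliases in path_to_aliases.items()
         if split_path == key or split_path.startswith(key + "/")
         for alias in aliases]
```

With the boundary check, `.../test_orc_src/` is no longer a prefix of the split path, so only `tm` matches.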



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11355) Hive on tez: memory manager for sort buffers (input/output) and operators

2016-02-10 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-11355:
--
Attachment: HIVE-11355.16.patch

Golden file updates.

> Hive on tez: memory manager for sort buffers (input/output) and operators
> -
>
> Key: HIVE-11355
> URL: https://issues.apache.org/jira/browse/HIVE-11355
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Affects Versions: 2.0.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-11355.1.patch, HIVE-11355.10.patch, 
> HIVE-11355.11.patch, HIVE-11355.12.patch, HIVE-11355.13.patch, 
> HIVE-11355.14.patch, HIVE-11355.15.patch, HIVE-11355.16.patch, 
> HIVE-11355.2.patch, HIVE-11355.3.patch, HIVE-11355.4.patch, 
> HIVE-11355.5.patch, HIVE-11355.6.patch, HIVE-11355.7.patch, 
> HIVE-11355.8.patch, HIVE-11355.9.patch
>
>
> We need to manage the sort buffer allocations better to ensure good 
> performance. We also need to provide configurations so that certain operators 
> stay within memory limits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13020) Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK

2016-02-10 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142137#comment-15142137
 ] 

Greg Senia commented on HIVE-13020:
---

[~thejas] and [~gopalv] no problem

> Hive Metastore and HiveServer2 to Zookeeper fails with IBM JDK
> --
>
> Key: HIVE-13020
> URL: https://issues.apache.org/jira/browse/HIVE-13020
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Metastore, Shims
>Affects Versions: 1.2.0, 1.3.0, 1.2.1
> Environment: Linux X86_64 and IBM JDK 8
>Reporter: Greg Senia
>Assignee: Greg Senia
>  Labels: hdp, ibm, ibm-jdk
> Attachments: HIVE-13020.patch, hivemetastore_afterpatch.txt, 
> hivemetastore_beforepatch.txt, hiveserver2_afterpatch.txt, 
> hiveserver2_beforepatch.txt
>
>
> The HiveServer2 and Hive Metastore Zookeeper component is hardcoded to 
> support only the Oracle JDK/OpenJDK. I was testing Hadoop running on 
> the IBM JDK and discovered this issue and have since drawn up the attached 
> patch. This looks to resolve the issue in a similar manner as how the Hadoop 
> core folks handle the IBM JDK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)