[jira] [Updated] (HIVE-13199) NDC stopped working in LLAP logging

2016-03-05 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13199:
-
Fix Version/s: 2.1.0

> NDC stopped working in LLAP logging
> ---
>
> Key: HIVE-13199
> URL: https://issues.apache.org/jira/browse/HIVE-13199
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Logging
>Affects Versions: 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Fix For: 2.1.0
>
> Attachments: HIVE-13199.1.patch
>
>
> NDC context was missing from the log lines. The reason is that the NDC class is 
> part of log4j-1.2-api (the bridge jar), which is added only as a compile-time 
> dependency. Because this jar is absent from the LLAP daemons, the NDC context 
> failed to initialize. Log4j2 replaced NDC with ThreadContext, hence we need the 
> bridge jar.
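The per-thread context stack that NDC (and its Log4j2 successor, ThreadContext) provides can be illustrated with a minimal stand-in. This sketch is not Log4j code; it only shows the behavior that disappears when the bridge jar is missing: a thread-local stack of context strings that a log layout can prepend to each line.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stand-in (illustrative, not Log4j's implementation) for what the
// NDC / ThreadContext stack provides: a per-thread stack of context strings
// that a logging layout can prepend to log lines.
public class DiagnosticContext {
    private static final ThreadLocal<Deque<String>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    public static void push(String ctx) {       // like NDC.push(ctx)
        STACK.get().push(ctx);
    }

    public static String pop() {                // like NDC.pop()
        return STACK.get().pop();
    }

    public static String peek() {               // current innermost context
        Deque<String> s = STACK.get();
        return s.isEmpty() ? "" : s.peek();
    }
}
```

With the bridge jar on the classpath, legacy `org.apache.log4j.NDC` calls are routed to Log4j2's `ThreadContext` stack; without it, pushes are lost and the context never appears in log lines.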



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13199) NDC stopped working in LLAP logging

2016-03-05 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13199:
-
  Resolution: Fixed
Target Version/s: 2.1.0, 2.0.1
  Status: Resolved  (was: Patch Available)

Committed to master. Also marked target version as 2.0.1 for inclusion to 
branch-2.0

> NDC stopped working in LLAP logging
> ---
>
> Key: HIVE-13199
> URL: https://issues.apache.org/jira/browse/HIVE-13199
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Logging
>Affects Versions: 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13199.1.patch
>
>
> NDC context was missing from the log lines. The reason is that the NDC class is 
> part of log4j-1.2-api (the bridge jar), which is added only as a compile-time 
> dependency. Because this jar is absent from the LLAP daemons, the NDC context 
> failed to initialize. Log4j2 replaced NDC with ThreadContext, hence we need the 
> bridge jar.





[jira] [Commented] (HIVE-13199) NDC stopped working in LLAP logging

2016-03-05 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182028#comment-15182028
 ] 

Prasanth Jayachandran commented on HIVE-13199:
--

Patch yet to be committed for 2.0.1 as it depends on other logging patches to 
be committed first.

> NDC stopped working in LLAP logging
> ---
>
> Key: HIVE-13199
> URL: https://issues.apache.org/jira/browse/HIVE-13199
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Logging
>Affects Versions: 2.1.0
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Fix For: 2.1.0
>
> Attachments: HIVE-13199.1.patch
>
>
> NDC context was missing from the log lines. The reason is that the NDC class is 
> part of log4j-1.2-api (the bridge jar), which is added only as a compile-time 
> dependency. Because this jar is absent from the LLAP daemons, the NDC context 
> failed to initialize. Log4j2 replaced NDC with ThreadContext, hence we need the 
> bridge jar.





[jira] [Commented] (HIVE-13211) normalize Hive.get overloads to go thru one path

2016-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182023#comment-15182023
 ] 

Ashutosh Chauhan commented on HIVE-13211:
-

+1

> normalize Hive.get overloads to go thru one path
> 
>
> Key: HIVE-13211
> URL: https://issues.apache.org/jira/browse/HIVE-13211
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13211.patch
>
>
> There are multiple subtly different paths in Hive.get(...) methods. Some 
> close the old db on refresh, some don't. Some check if the client is 
> compatible with config, some don't. Also there were some parameters (don't 
> register functions, disallow embedded metastore) that were added recently.
> Need to make this stuff go thru one path.





[jira] [Commented] (HIVE-12049) Provide an option to write serialized thrift objects in final tasks

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182022#comment-15182022
 ] 

Hive QA commented on HIVE-12049:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791362/HIVE-12049.11.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 9781 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnStatsUpdateForStatsOptimizer_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnStatsUpdateForStatsOptimizer_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_only_null
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_truncate_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_remove_26
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_into2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_stats_only_null
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_into2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats_only_null
org.apache.hive.beeline.TestBeeLineWithArgs.testEmbeddedBeelineOutputs
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel
org.apache.hive.jdbc.TestJdbcDriver2.testExplainStmt
org.apache.hive.jdbc.TestJdbcDriver2.testGetQueryLog
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7174/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7174/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7174/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 30 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791362 - PreCommit-HIVE-TRUNK-Build

> Provide an option to write serialized thrift objects in final tasks
> ---
>
> Key: HIVE-12049
> URL: https://issues.apache.org/jira/browse/HIVE-12049
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Rohit Dholakia
>Assignee: Rohit Dholakia
> Attachments: HIVE-12049.1.patch, HIVE-12049.11.patch, 
> HIVE-12049.2.patch, HIVE-12049.3.patch, HIVE-12049.4.patch, 
> HIVE-12049.5.patch, HIVE-12049.6.patch, HIVE-12049.7.patch, HIVE-12049.9.patch
>
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing 
> the row objects and translating them into a different representation suitable 
> for the RPC transfer. In moderate to high concurrency scenarios, this can 
> result in significant CPU and memory wastage. By having each task write the 
> appropriate thrift objects to the output files, HiveServer2 can simply stream 
> a batch of rows on the wire without incurring any of the additional cost of 
> deserialization and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator 
> can use to write thrift formatted row 

[jira] [Commented] (HIVE-13201) Compaction shouldn't be allowed on non-ACID table

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181981#comment-15181981
 ] 

Hive QA commented on HIVE-13201:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791364/HIVE-13201.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 9767 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_compact1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_compact2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_compact3
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7173/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7173/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7173/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791364 - PreCommit-HIVE-TRUNK-Build

> Compaction shouldn't be allowed on non-ACID table
> -
>
> Key: HIVE-13201
> URL: https://issues.apache.org/jira/browse/HIVE-13201
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-13201.1.patch
>
>
> Looks like compaction is allowed on non-ACID tables, although that makes no 
> sense and does nothing. Moreover, the compaction request will be enqueued into 
> the COMPACTION_QUEUE metastore table, which adds unnecessary overhead.
> We should prevent compaction commands from being accepted on non-ACID tables.
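The guard described above can be sketched as follows. `CompactionChecker` and its methods are illustrative names, not Hive's actual compactor API; the `transactional=true` table property, however, is how Hive marks ACID tables.

```java
import java.util.Map;

// Illustrative pre-check (not Hive's real compactor API): reject a compaction
// request before it is enqueued when the target table is not ACID. Hive marks
// ACID tables with the table property transactional=true.
public class CompactionChecker {
    public static boolean isAcidTable(Map<String, String> tableProps) {
        return "true".equalsIgnoreCase(tableProps.get("transactional"));
    }

    public static void validateCompactionRequest(Map<String, String> tableProps) {
        if (!isAcidTable(tableProps)) {
            // Fail fast instead of adding a no-op entry to COMPACTION_QUEUE.
            throw new IllegalArgumentException(
                "Compaction is only supported on ACID (transactional) tables");
        }
    }
}
```

Rejecting the request at submission time keeps the queue free of entries that the compactor would only discard later.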





[jira] [Commented] (HIVE-12995) LLAP: Synthetic file ids need collision checks

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181931#comment-15181931
 ] 

Hive QA commented on HIVE-12995:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791275/HIVE-12995.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9781 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.llap.cache.TestIncrementalObjectSizeEstimator.testMetadata
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7172/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7172/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7172/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791275 - PreCommit-HIVE-TRUNK-Build

> LLAP: Synthetic file ids need collision checks
> --
>
> Key: HIVE-12995
> URL: https://issues.apache.org/jira/browse/HIVE-12995
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12995.01.patch, HIVE-12995.patch
>
>
> LLAP synthetic file ids have no way of detecting a collision other than a 
> data error.
> Synthetic file-ids have only been used with unit tests so far, but they will 
> be needed to add cache mechanisms to non-HDFS filesystems.
> For synthetic file-ids, it is recommended that we track the full tuple 
> (path, mtime, len) in the cache so that a cache hit for the synthetic file-id 
> can be compared against those parameters and accepted only if they match.
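The tuple check suggested above can be sketched as follows. The class and method names are illustrative, not Hive's actual cache API; the point is that a cache hit keyed by a synthetic (hash-derived) id is accepted only when the stored (path, mtime, len) tuple matches the file being read.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative sketch (not Hive's real cache): validate synthetic-file-id
// cache hits against the full (path, mtime, len) tuple, since distinct files
// can hash to the same synthetic id.
public class SyntheticIdCache {
    static final class FileKey {
        final String path; final long mtime; final long len;
        FileKey(String path, long mtime, long len) {
            this.path = path; this.mtime = mtime; this.len = len;
        }
    }

    private final Map<Long, FileKey> byId = new HashMap<>();

    // Derive a synthetic id by hashing the tuple; collisions are possible.
    static long syntheticId(String path, long mtime, long len) {
        return Objects.hash(path, mtime, len);
    }

    public void put(String path, long mtime, long len) {
        byId.put(syntheticId(path, mtime, len), new FileKey(path, mtime, len));
    }

    // Accept a hit only when the full tuple matches; otherwise treat it as a
    // miss rather than serving another file's cached data.
    public boolean validHit(String path, long mtime, long len) {
        FileKey k = byId.get(syntheticId(path, mtime, len));
        return k != null && k.path.equals(path)
                && k.mtime == mtime && k.len == len;
    }
}
```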





[jira] [Updated] (HIVE-13178) Enhance ORC Schema Evolution to handle more standard data type conversions

2016-03-05 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13178:

Attachment: HIVE-13178.03.patch

> Enhance ORC Schema Evolution to handle more standard data type conversions
> --
>
> Key: HIVE-13178
> URL: https://issues.apache.org/jira/browse/HIVE-13178
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13178.01.patch, HIVE-13178.02.patch, 
> HIVE-13178.03.patch
>
>
> Currently, SHORT -> INT -> BIGINT is supported.
> Handle the ORC data type conversions permitted by the 
> TypeInfoUtils.implicitConvertible method:
>*   STRING_GROUP -> DOUBLE
>*   STRING_GROUP -> DECIMAL
>*   DATE_GROUP -> STRING
>*   NUMERIC_GROUP -> STRING
>*   STRING_GROUP -> STRING_GROUP
>*
>*   // Upward from "lower" type to "higher" numeric type:
>*   BYTE -> SHORT -> INT -> BIGINT -> FLOAT -> DOUBLE -> DECIMAL
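The "upward" numeric rule in the list above can be expressed as an ordering check. This sketch is illustrative, not Hive's `TypeInfoUtils` itself: a conversion within the numeric group is implicit only when it moves from a lower type to a higher one in the chain.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative ordering check (not Hive's TypeInfoUtils) for the numeric
// chain BYTE -> SHORT -> INT -> BIGINT -> FLOAT -> DOUBLE -> DECIMAL:
// implicit conversion is allowed only "upward" along the chain.
public class NumericWidening {
    private static final List<String> ORDER = Arrays.asList(
        "BYTE", "SHORT", "INT", "BIGINT", "FLOAT", "DOUBLE", "DECIMAL");

    public static boolean implicitlyConvertible(String from, String to) {
        int i = ORDER.indexOf(from);
        int j = ORDER.indexOf(to);
        // e.g. SHORT -> BIGINT is allowed; DOUBLE -> INT is not.
        return i >= 0 && j >= 0 && i <= j;
    }
}
```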





[jira] [Updated] (HIVE-13178) Enhance ORC Schema Evolution to handle more standard data type conversions

2016-03-05 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13178:

Attachment: (was: HIVE-13178.03.patch)

> Enhance ORC Schema Evolution to handle more standard data type conversions
> --
>
> Key: HIVE-13178
> URL: https://issues.apache.org/jira/browse/HIVE-13178
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13178.01.patch, HIVE-13178.02.patch
>
>
> Currently, SHORT -> INT -> BIGINT is supported.
> Handle the ORC data type conversions permitted by the 
> TypeInfoUtils.implicitConvertible method:
>*   STRING_GROUP -> DOUBLE
>*   STRING_GROUP -> DECIMAL
>*   DATE_GROUP -> STRING
>*   NUMERIC_GROUP -> STRING
>*   STRING_GROUP -> STRING_GROUP
>*
>*   // Upward from "lower" type to "higher" numeric type:
>*   BYTE -> SHORT -> INT -> BIGINT -> FLOAT -> DOUBLE -> DECIMAL





[jira] [Commented] (HIVE-9313) thrift.transport.TTransportException [Spark Branch]

2016-03-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181883#comment-15181883
 ] 

Jakob Stengård commented on HIVE-9313:
--

Or rather, it hangs forever.

> thrift.transport.TTransportException [Spark Branch]
> ---
>
> Key: HIVE-9313
> URL: https://issues.apache.org/jira/browse/HIVE-9313
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chao Sun
>Assignee: Chao Sun
>
> Running beeline with TPC-DS queries sometimes gives the following exception:
> {noformat}
> 2015-01-07 22:01:22,421 ERROR [HiveServer2-Handler-Pool: Thread-29]: 
> server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred 
> during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: 
> Invalid status 71
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException: Invalid status 71
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> at 
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
> at 
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> ... 4 more
> {noformat}
> We need to investigate this.







[jira] [Commented] (HIVE-9313) thrift.transport.TTransportException [Spark Branch]

2016-03-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181881#comment-15181881
 ] 

Jakob Stengård commented on HIVE-9313:
--

I have this issue. It seems to make drop table fail in HiveServer2.


> thrift.transport.TTransportException [Spark Branch]
> ---
>
> Key: HIVE-9313
> URL: https://issues.apache.org/jira/browse/HIVE-9313
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chao Sun
>Assignee: Chao Sun
>
> Running beeline with TPC-DS queries sometimes gives the following exception:
> {noformat}
> 2015-01-07 22:01:22,421 ERROR [HiveServer2-Handler-Pool: Thread-29]: 
> server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred 
> during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: 
> Invalid status 71
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException: Invalid status 71
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> at 
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
> at 
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
> at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
> at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> ... 4 more
> {noformat}
> We need to investigate this.





[jira] [Updated] (HIVE-13214) Duplicate MySQL Indexes

2016-03-05 Thread Ryan Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Lowe updated HIVE-13214:
-
Component/s: Metastore

> Duplicate MySQL Indexes
> ---
>
> Key: HIVE-13214
> URL: https://issues.apache.org/jira/browse/HIVE-13214
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Ryan Lowe
>Assignee: Ryan Lowe
>Priority: Minor
> Attachments: HIVE-13214.patch
>
>
> Running pt-duplicate-key-checker 
> (https://www.percona.com/doc/percona-toolkit/2.2/pt-duplicate-key-checker.html)
>  against the schema generated from 
> metastore/scripts/upgrade/mysql/hive-schema-2.1.0.mysql.sql, the following 
> duplicate indexes are found:
> {code}
> # 
> # test.BUCKETING_COLS 
> # 
> # BUCKETING_COLS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `BUCKETING_COLS_N49` (`SD_ID`),
> #   PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
> # Column types:
> #   `sd_id` bigint(20) not null
> #   `integer_idx` int(11) not null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`BUCKETING_COLS` DROP INDEX `BUCKETING_COLS_N49`;
> # 
> # test.COLUMNS_V2 
> # 
> # COLUMNS_V2_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `COLUMNS_V2_N49` (`CD_ID`),
> #   PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
> # Column types:
> #   `cd_id` bigint(20) not null
> #   `column_name` varchar(767) character set latin1 collate latin1_bin 
> not null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`COLUMNS_V2` DROP INDEX `COLUMNS_V2_N49`;
> # 
> # test.DATABASE_PARAMS
> # 
> # DATABASE_PARAMS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `DATABASE_PARAMS_N49` (`DB_ID`),
> #   PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
> # Column types:
> #   `db_id` bigint(20) not null
> #   `param_key` varchar(180) character set latin1 collate latin1_bin not 
> null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`DATABASE_PARAMS` DROP INDEX `DATABASE_PARAMS_N49`;
> # 
> # test.DB_PRIVS   
> # 
> # DB_PRIVS_N49 is a left-prefix of DBPRIVILEGEINDEX
> # Key definitions:
> #   KEY `DB_PRIVS_N49` (`DB_ID`),
> #   UNIQUE KEY `DBPRIVILEGEINDEX` 
> (`DB_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`DB_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
> # Column types:
> #   `db_id` bigint(20) default null
> #   `principal_name` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `principal_type` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `db_priv` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `grantor` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `grantor_type` varchar(128) character set latin1 collate latin1_bin 
> default null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`DB_PRIVS` DROP INDEX `DB_PRIVS_N49`;
> # 
> # test.INDEX_PARAMS   
> # 
> # INDEX_PARAMS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `INDEX_PARAMS_N49` (`INDEX_ID`),
> #   PRIMARY KEY (`INDEX_ID`,`PARAM_KEY`),
> # Column types:
> #   `index_id` bigint(20) not null
> #   `param_key` varchar(256) character set latin1 collate latin1_bin not 
> null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`INDEX_PARAMS` DROP INDEX `INDEX_PARAMS_N49`;
> # 
> # test.PARTITION_KEYS 
> # 
> # PARTITION_KEYS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `PARTITION_KEYS_N49` (`TBL_ID`),
> #   PRIMARY KEY (`TBL_ID`,`PKEY_NAME`),
> # Column types:
> #   `tbl_id` bigint(20) not null
> #   `pkey_name` 

[jira] [Updated] (HIVE-13214) Duplicate MySQL Indexes

2016-03-05 Thread Ryan Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Lowe updated HIVE-13214:
-
Attachment: HIVE-13214.patch

> Duplicate MySQL Indexes
> ---
>
> Key: HIVE-13214
> URL: https://issues.apache.org/jira/browse/HIVE-13214
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Ryan Lowe
>Assignee: Ryan Lowe
>Priority: Minor
> Attachments: HIVE-13214.patch
>
>
> Running pt-duplicate-key-checker 
> (https://www.percona.com/doc/percona-toolkit/2.2/pt-duplicate-key-checker.html)
>  against the schema generated from 
> metastore/scripts/upgrade/mysql/hive-schema-2.1.0.mysql.sql, the following 
> duplicate indexes are found:
> {code}
> # 
> # test.BUCKETING_COLS 
> # 
> # BUCKETING_COLS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `BUCKETING_COLS_N49` (`SD_ID`),
> #   PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
> # Column types:
> #   `sd_id` bigint(20) not null
> #   `integer_idx` int(11) not null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`BUCKETING_COLS` DROP INDEX `BUCKETING_COLS_N49`;
> # 
> # test.COLUMNS_V2 
> # 
> # COLUMNS_V2_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `COLUMNS_V2_N49` (`CD_ID`),
> #   PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
> # Column types:
> #   `cd_id` bigint(20) not null
> #   `column_name` varchar(767) character set latin1 collate latin1_bin 
> not null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`COLUMNS_V2` DROP INDEX `COLUMNS_V2_N49`;
> # 
> # test.DATABASE_PARAMS
> # 
> # DATABASE_PARAMS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `DATABASE_PARAMS_N49` (`DB_ID`),
> #   PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
> # Column types:
> #   `db_id` bigint(20) not null
> #   `param_key` varchar(180) character set latin1 collate latin1_bin not 
> null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`DATABASE_PARAMS` DROP INDEX `DATABASE_PARAMS_N49`;
> # 
> # test.DB_PRIVS   
> # 
> # DB_PRIVS_N49 is a left-prefix of DBPRIVILEGEINDEX
> # Key definitions:
> #   KEY `DB_PRIVS_N49` (`DB_ID`),
> #   UNIQUE KEY `DBPRIVILEGEINDEX` 
> (`DB_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`DB_PRIV`,`GRANTOR`,`GRANTOR_TYPE`),
> # Column types:
> #   `db_id` bigint(20) default null
> #   `principal_name` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `principal_type` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `db_priv` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `grantor` varchar(128) character set latin1 collate latin1_bin 
> default null
> #   `grantor_type` varchar(128) character set latin1 collate latin1_bin 
> default null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`DB_PRIVS` DROP INDEX `DB_PRIVS_N49`;
> # 
> # test.INDEX_PARAMS   
> # 
> # INDEX_PARAMS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `INDEX_PARAMS_N49` (`INDEX_ID`),
> #   PRIMARY KEY (`INDEX_ID`,`PARAM_KEY`),
> # Column types:
> #   `index_id` bigint(20) not null
> #   `param_key` varchar(256) character set latin1 collate latin1_bin not 
> null
> # To remove this duplicate index, execute:
> ALTER TABLE `test`.`INDEX_PARAMS` DROP INDEX `INDEX_PARAMS_N49`;
> # 
> # test.PARTITION_KEYS 
> # 
> # PARTITION_KEYS_N49 is a left-prefix of PRIMARY
> # Key definitions:
> #   KEY `PARTITION_KEYS_N49` (`TBL_ID`),
> #   PRIMARY KEY (`TBL_ID`,`PKEY_NAME`),
> # Column types:
> #   `tbl_id` bigint(20) not null
> #   `pkey_name` 

[jira] [Updated] (HIVE-13149) Remove some unnecessary HMS connections from HS2

2016-03-05 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13149:

Attachment: HIVE-13149.3.patch

> Remove some unnecessary HMS connections from HS2 
> -
>
> Key: HIVE-13149
> URL: https://issues.apache.org/jira/browse/HIVE-13149
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13149.1.patch, HIVE-13149.2.patch, 
> HIVE-13149.3.patch
>
>
> In the SessionState class, we currently always try to get a HMS connection 
> in {{start(SessionState startSs, boolean isAsync, LogHelper console)}}, 
> regardless of whether the connection will be used later or not. 
> When SessionState is accessed by the tasks in TaskRunner.java, a new HMS 
> connection is established for each Task thread, although most tasks (unlike 
> some, such as StatsTask) don't need to access HMS. If 
> HiveServer2 is configured to run in parallel and the query involves many 
> tasks, the connections are created but unused.
> {noformat}
>   @Override
>   public void run() {
> runner = Thread.currentThread();
> try {
>   OperationLog.setCurrentOperationLog(operationLog);
>   SessionState.start(ss);
>   runSequential();
> {noformat}
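A lazy-initialization wrapper of the kind this issue suggests could look like the following sketch (plain Java with hypothetical names, not the actual Hive patch), where the connection is established only the first time a task actually asks for it:

```java
import java.util.function.Supplier;

// Hypothetical sketch (not the actual Hive change): defer creating the
// expensive HMS connection until a task actually asks for it, instead of
// connecting eagerly in SessionState.start() for every task thread.
final class LazyConnection<T> {
  private final Supplier<T> factory;
  private T instance;  // created on first use only

  LazyConnection(Supplier<T> factory) {
    this.factory = factory;
  }

  synchronized T get() {
    if (instance == null) {
      instance = factory.get();  // connect only when first needed
    }
    return instance;
  }

  synchronized boolean isCreated() {
    return instance != null;
  }
}
```

Tasks that never call get() never pay for a connection; tasks like StatsTask still obtain one on demand.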



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13171) Add unit test for hs2 webui

2016-03-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181866#comment-15181866
 ] 

Aihua Xu commented on HIVE-13171:
-

+1. Looks good to me.

> Add unit test for hs2 webui
> ---
>
> Key: HIVE-13171
> URL: https://issues.apache.org/jira/browse/HIVE-13171
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Attachments: HIVE-13171.2.patch, HIVE-13171.3.patch, HIVE-13171.patch
>
>
> With more complex changes going into webui, it is hard to manually verify all 
> the kinds of cases.
> With HIVE-12952, HS2 webui now uses jamon, which should be more unit-testable 
> than plain old jsp.  We can perhaps add unit test for the jamon servlets, or 
> test the new OperationDisplay classes queried by the servlets.





[jira] [Commented] (HIVE-12878) Support Vectorization for TEXTFILE and other formats

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181856#comment-15181856
 ] 

Hive QA commented on HIVE-12878:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791258/HIVE-12878.06.patch

{color:green}SUCCESS:{color} +1 due to 21 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 9766 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-stats13.q-groupby6_map.q-join_casesensitive.q-and-12-more - 
did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_partition_diff_num_cols
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_ptf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_windowing_streaming
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_llap_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_llap_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_partition_diff_num_cols
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_ptf
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorization
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorizationWithAcid
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorizationWithBuckets
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7171/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7171/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7171/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791258 - PreCommit-HIVE-TRUNK-Build

> Support Vectorization for TEXTFILE and other formats
> 
>
> Key: HIVE-12878
> URL: https://issues.apache.org/jira/browse/HIVE-12878
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-12878.01.patch, HIVE-12878.02.patch, 
> HIVE-12878.03.patch, HIVE-12878.04.patch, HIVE-12878.05.patch, 
> HIVE-12878.06.patch
>
>
> Support vectorizing when the input format is TEXTFILE and other formats for 
> better Map Vertex performance.





[jira] [Commented] (HIVE-12244) Refactoring code for avoiding of comparison of Strings and do comparison on Path

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181809#comment-15181809
 ] 

Hive QA commented on HIVE-12244:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791208/HIVE-12244.9.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 476 failed/errored test(s), 9646 tests 
executed
*Failed tests:*
{noformat}
TestCliDriver-auto_join18_multi_distinct.q-interval_udf.q-authorization_1.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-auto_sortmerge_join_7.q-orc_createas1.q-encryption_join_with_different_encryption_keys.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-avro_decimal_native.q-alter_file_format.q-groupby3_map_skew.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-cte_4.q-nullscript.q-filter_join_breaktask.q-and-12-more - did 
not produce a TEST-*.xml file
TestCliDriver-groupby3_map.q-orc_merge9.q-alter1.q-and-12-more - did not 
produce a TEST-*.xml file
TestCliDriver-index_compact_2.q-vector_grouping_sets.q-lateral_view_cp.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-input17.q-auto_join25.q-udf_logic_java_boolean.q-and-12-more - 
did not produce a TEST-*.xml file
TestCliDriver-input44.q-drop_partitions_filter2.q-smb_mapjoin_4.q-and-12-more - 
did not produce a TEST-*.xml file
TestCliDriver-part_inherit_tbl_props_with_star.q-load_dyn_part2.q-truncate_table.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_project
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_add_part_exist
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_index
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_2_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_show_grant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join24
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27

[jira] [Updated] (HIVE-13213) make DbLockManger work for non-acid resources

2016-03-05 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13213:
--
Status: Patch Available  (was: Open)

> make DbLockManger work for non-acid resources
> -
>
> Key: HIVE-13213
> URL: https://issues.apache.org/jira/browse/HIVE-13213
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13213.patch
>
>
> For example, for
> insert into T values(...)
> if T is an ACID table we acquire a Read lock,
> but for a non-ACID table it should acquire an Exclusive lock.
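The rule described above can be sketched as a tiny lock-type chooser (hypothetical names, not the actual DbTxnManager code):

```java
// Hypothetical sketch of the rule described above: an INSERT into an ACID
// table can take a shared read lock because the transaction manager
// isolates the write, while the same INSERT into a non-ACID table must
// lock the table exclusively.
enum LockType { SHARED_READ, EXCLUSIVE }

final class LockChooser {
  static LockType forInsert(boolean isAcidTable) {
    return isAcidTable ? LockType.SHARED_READ : LockType.EXCLUSIVE;
  }
}
```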





[jira] [Updated] (HIVE-13213) make DbLockManger work for non-acid resources

2016-03-05 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13213:
--
Attachment: HIVE-13213.patch

> make DbLockManger work for non-acid resources
> -
>
> Key: HIVE-13213
> URL: https://issues.apache.org/jira/browse/HIVE-13213
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13213.patch
>
>
> For example, for
> insert into T values(...)
> if T is an ACID table we acquire a Read lock,
> but for a non-ACID table it should acquire an Exclusive lock.





[jira] [Updated] (HIVE-13200) Aggregation functions returning empty rows on partitioned columns

2016-03-05 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13200:

   Resolution: Fixed
Fix Version/s: 2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~sershe] for reviewing the code.
The fix is committed to master and branch-1

> Aggregation functions returning empty rows on partitioned columns
> -
>
> Key: HIVE-13200
> URL: https://issues.apache.org/jira/browse/HIVE-13200
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-13200.1.patch
>
>
> Running aggregation functions like MAX, MIN, DISTINCT against partitioned 
> columns will return empty rows if table has property: 
> 'skip.header.line.count'='1'
> Reproduce:
> {noformat}
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (a int) 
> PARTITIONED BY (b int) 
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' 
> TBLPROPERTIES('skip.header.line.count'='1');
> INSERT OVERWRITE TABLE test PARTITION (b = 1) VALUES (1), (2), (3), (4);
> INSERT OVERWRITE TABLE test PARTITION (b = 2) VALUES (1), (2), (3), (4);
> SELECT * FROM test;
> SELECT DISTINCT b FROM test;
> SELECT MAX(b) FROM test;
> SELECT DISTINCT a FROM test;
> {noformat}
> The output:
> {noformat}
> 0: jdbc:hive2://localhost:1/default> SELECT * FROM test;
> +-+-+--+
> | test.a  | test.b  |
> +-+-+--+
> | 2   | 1   |
> | 3   | 1   |
> | 4   | 1   |
> | 2   | 2   |
> | 3   | 2   |
> | 4   | 2   |
> +-+-+--+
> 6 rows selected (0.631 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT b FROM test;
> ++--+
> | b  |
> ++--+
> ++--+
> No rows selected (47.229 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT MAX(b) FROM test;
> +---+--+
> |  _c0  |
> +---+--+
> | NULL  |
> +---+--+
> 1 row selected (49.508 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT a FROM test;
> ++--+
> | a  |
> ++--+
> | 2  |
> | 3  |
> | 4  |
> ++--+
> 3 rows selected (46.859 seconds)
> {noformat}
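What 'skip.header.line.count'='1' is supposed to do can be illustrated with a short sketch (not Hive's actual reader): the first line of each data file is treated as a header and dropped, which is why the SELECT * output above shows only 2, 3, 4 per partition: the inserted value "1" sits on the first line of each partition's file and is consumed as the "header".

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: drop the first headerCount lines of a file's
// contents, as 'skip.header.line.count' instructs the text reader to do.
final class HeaderSkipper {
  static List<String> skipHeader(List<String> lines, int headerCount) {
    // Clamp to a valid range so an oversized header count yields an empty list
    int from = Math.min(Math.max(headerCount, 0), lines.size());
    return new ArrayList<>(lines.subList(from, lines.size()));
  }
}
```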





[jira] [Commented] (HIVE-13200) Aggregation functions returning empty rows on partitioned columns

2016-03-05 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181756#comment-15181756
 ] 

Yongzhi Chen commented on HIVE-13200:
-

The failures are not related. 

> Aggregation functions returning empty rows on partitioned columns
> -
>
> Key: HIVE-13200
> URL: https://issues.apache.org/jira/browse/HIVE-13200
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-13200.1.patch
>
>
> Running aggregation functions like MAX, MIN, DISTINCT against partitioned 
> columns will return empty rows if table has property: 
> 'skip.header.line.count'='1'
> Reproduce:
> {noformat}
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (a int) 
> PARTITIONED BY (b int) 
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' 
> TBLPROPERTIES('skip.header.line.count'='1');
> INSERT OVERWRITE TABLE test PARTITION (b = 1) VALUES (1), (2), (3), (4);
> INSERT OVERWRITE TABLE test PARTITION (b = 2) VALUES (1), (2), (3), (4);
> SELECT * FROM test;
> SELECT DISTINCT b FROM test;
> SELECT MAX(b) FROM test;
> SELECT DISTINCT a FROM test;
> {noformat}
> The output:
> {noformat}
> 0: jdbc:hive2://localhost:1/default> SELECT * FROM test;
> +-+-+--+
> | test.a  | test.b  |
> +-+-+--+
> | 2   | 1   |
> | 3   | 1   |
> | 4   | 1   |
> | 2   | 2   |
> | 3   | 2   |
> | 4   | 2   |
> +-+-+--+
> 6 rows selected (0.631 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT b FROM test;
> ++--+
> | b  |
> ++--+
> ++--+
> No rows selected (47.229 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT MAX(b) FROM test;
> +---+--+
> |  _c0  |
> +---+--+
> | NULL  |
> +---+--+
> 1 row selected (49.508 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT a FROM test;
> ++--+
> | a  |
> ++--+
> | 2  |
> | 3  |
> | 4  |
> ++--+
> 3 rows selected (46.859 seconds)
> {noformat}





[jira] [Commented] (HIVE-13200) Aggregation functions returning empty rows on partitioned columns

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181755#comment-15181755
 ] 

Hive QA commented on HIVE-13200:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791205/HIVE-13200.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9767 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7168/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7168/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7168/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791205 - PreCommit-HIVE-TRUNK-Build

> Aggregation functions returning empty rows on partitioned columns
> -
>
> Key: HIVE-13200
> URL: https://issues.apache.org/jira/browse/HIVE-13200
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-13200.1.patch
>
>
> Running aggregation functions like MAX, MIN, DISTINCT against partitioned 
> columns will return empty rows if table has property: 
> 'skip.header.line.count'='1'
> Reproduce:
> {noformat}
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (a int) 
> PARTITIONED BY (b int) 
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' 
> TBLPROPERTIES('skip.header.line.count'='1');
> INSERT OVERWRITE TABLE test PARTITION (b = 1) VALUES (1), (2), (3), (4);
> INSERT OVERWRITE TABLE test PARTITION (b = 2) VALUES (1), (2), (3), (4);
> SELECT * FROM test;
> SELECT DISTINCT b FROM test;
> SELECT MAX(b) FROM test;
> SELECT DISTINCT a FROM test;
> {noformat}
> The output:
> {noformat}
> 0: jdbc:hive2://localhost:1/default> SELECT * FROM test;
> +-+-+--+
> | test.a  | test.b  |
> +-+-+--+
> | 2   | 1   |
> | 3   | 1   |
> | 4   | 1   |
> | 2   | 2   |
> | 3   | 2   |
> | 4   | 2   |
> +-+-+--+
> 6 rows selected (0.631 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT b FROM test;
> ++--+
> | b  |
> ++--+
> ++--+
> No rows selected (47.229 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT MAX(b) FROM test;
> +---+--+
> |  _c0  |
> +---+--+
> | NULL  |
> +---+--+
> 1 row selected (49.508 seconds)
> 0: jdbc:hive2://localhost:1/default> SELECT DISTINCT a FROM test;
> ++--+
> | a  |
> ++--+
> | 2  |
> | 3  |
> | 4  |
> ++--+
> 3 rows selected (46.859 seconds)
> {noformat}





[jira] [Updated] (HIVE-13188) Allow users of RetryingThriftClient to close transport

2016-03-05 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-13188:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks [~prongs]

> Allow users of RetryingThriftClient to close transport
> --
>
> Key: HIVE-13188
> URL: https://issues.apache.org/jira/browse/HIVE-13188
> Project: Hive
>  Issue Type: Task
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 2.1.0
>
> Attachments: HIVE-13188.02.patch, HIVE-13188.03.patch
>
>
> RetryingThriftCLIClient opens a TTransport and leaves it open; there should 
> be a way to close it. 





[jira] [Updated] (HIVE-5370) format_number udf should take user specifed format as argument

2016-03-05 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-5370:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed. 

> format_number udf should take user specifed format as argument
> --
>
> Key: HIVE-5370
> URL: https://issues.apache.org/jira/browse/HIVE-5370
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Amareshwari Sriramadasu
>Assignee: Amareshwari Sriramadasu
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: D13185.1.patch, D13185.2.patch, HIVE-5370.2.patch, 
> HIVE-5370.3.patch, HIVE-5370.patch, HIVE-5370.patch
>
>
> Currently, the format_number udf formats the number to #,###,###.##, but it 
> should also take a user-specified format as optional input.
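Such an overload could be sketched with java.text.DecimalFormat (a hypothetical illustration, not the committed patch):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Hypothetical sketch: format_number(value, pattern) applies a
// caller-supplied DecimalFormat pattern; the one-argument overload keeps
// the current fixed #,###,###.## behavior.
final class FormatNumberSketch {
  static String format(double value, String pattern) {
    // Locale pinned to US so '.' and ',' keep their usual roles
    return new DecimalFormat(pattern, DecimalFormatSymbols.getInstance(Locale.US))
        .format(value);
  }

  static String format(double value) {
    return format(value, "#,###,###.##");  // current default behavior
  }
}
```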





[jira] [Updated] (HIVE-13179) Allow custom HiveConf to be passed to Authentication Providers

2016-03-05 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-13179:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks [~prongs]

> Allow custom HiveConf to be passed to Authentication Providers
> --
>
> Key: HIVE-13179
> URL: https://issues.apache.org/jira/browse/HIVE-13179
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 2.1.0
>
> Attachments: HIVE-13179.05.patch, HIVE-13179.1.patch, 
> HIVE-13179.patch, HIVE-13179.patch
>
>
> Right now, if I want to create an LDAP auth provider, I have to create a 
> hive-site.xml, set endpoints and other relevant properties there, and then 
> instantiate `LdapAuthenticationProviderImpl`, since inside the constructor a 
> new HiveConf is constructed. 
> A better and more reusable design would be to ask for the conf in the 
> constructor itself. That would allow an external user to create a HiveConf, 
> set all relevant properties, and instantiate `LdapAuthenticationProviderImpl` 
> with that conf. 
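The proposed constructor injection might look like this sketch (java.util.Properties stands in for HiveConf here; the class and key names are hypothetical):

```java
import java.util.Properties;

// Hypothetical sketch of the constructor-injection change proposed above:
// accept the configuration as a parameter, keeping a no-arg convenience
// constructor, instead of building a fresh conf inside the constructor.
final class LdapAuthProviderSketch {
  private final Properties conf;  // stands in for HiveConf

  LdapAuthProviderSketch() {
    this(new Properties());       // old behavior: build a default conf
  }

  LdapAuthProviderSketch(Properties conf) {
    this.conf = conf;             // new behavior: caller supplies the conf
  }

  String endpoint() {
    return conf.getProperty("ldap.url", "ldap://localhost");
  }
}
```

A caller can now configure once and pass the conf in, which also makes the provider testable without a hive-site.xml on the classpath.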





[jira] [Commented] (HIVE-11837) comments do not support unicode characters well.

2016-03-05 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181702#comment-15181702
 ] 

Yongzhi Chen commented on HIVE-11837:
-

The failures are not related. 
+1

> comments do not support unicode characters well.
> 
>
> Key: HIVE-11837
> URL: https://issues.apache.org/jira/browse/HIVE-11837
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.13.1, 1.1.0
> Environment: Hadoop 2.7
> Hive 0.13.1 / Hive 1.1.0
> RHEL 6.4 / SLES 11.3
>Reporter: Rudd Chen
>Assignee: Rudd Chen
>Priority: Minor
> Attachments: HIVE-11837.1.patch, HIVE-11837.patch
>
>
> The terminal encoding is set to UTF-8, and it can display Chinese characters. 
> When I create a table with a comment in Chinese, both "show create table" and 
> "desc formatted table" cannot display the Chinese characters in the table 
> comment, while Chinese characters in column comments display fine. See 
> below:
> 0: jdbc:hive2://ha-cluster/default> create table tt(id int comment '列中文测试') 
> comment '表中文测试';
> No rows affected (0.152 seconds)
> 0: jdbc:hive2://ha-cluster/default> 
> 0: jdbc:hive2://ha-cluster/default> 
> 0: jdbc:hive2://ha-cluster/default> desc formatted tt;   
> +---+---+-+
> |   col_name| data_type | comment |
> +---+---+-+
> | # col_name| data_type | comment |
> |   | NULL  | NULL|
> | id| int   | 列中文测试  |
> |   | NULL  | NULL|
> | # Detailed Table Information  | NULL  | NULL|
> | Database: | default   | NULL|
> | Owner:| admin | NULL|
> | CreateTime:   | Wed Sep 16 11:13:34 CST 2015  | NULL|
> | LastAccessTime:   | UNKNOWN   | NULL|
> | Protect Mode: | None  | NULL|
> | Retention:| 0 | NULL|
> | Location: | hdfs://hacluster/user/hive/warehouse/tt   | NULL|
> | Table Type:   | MANAGED_TABLE | NULL|
> | Table Parameters: | NULL  | NULL|
> |   | comment   | \u8868\u4E2D\u6587\u6D4B\u8BD5  |
> |   | transient_lastDdlTime | 1442373214  |
> |   | NULL  | NULL|
> | # Storage Information | NULL  | NULL|
> | SerDe Library:| org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe  | NULL|
> | InputFormat:  | org.apache.hadoop.hive.ql.io.RCFileInputFormat| NULL|
> | OutputFormat: | org.apache.hadoop.hive.ql.io.RCFileOutputFormat   | NULL|
> | Compressed:   | No| NULL|
> | Num Buckets:  | -1| NULL|
> | Bucket Columns:   | []| NULL|
> | Sort Columns: | []

[jira] [Commented] (HIVE-13179) Allow custom HiveConf to be passed to Authentication Providers

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181698#comment-15181698
 ] 

Hive QA commented on HIVE-13179:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791127/HIVE-13179.05.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9764 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7167/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7167/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7167/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791127 - PreCommit-HIVE-TRUNK-Build

> Allow custom HiveConf to be passed to Authentication Providers
> --
>
> Key: HIVE-13179
> URL: https://issues.apache.org/jira/browse/HIVE-13179
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Attachments: HIVE-13179.05.patch, HIVE-13179.1.patch, 
> HIVE-13179.patch, HIVE-13179.patch
>
>
> Right now, if I want to create an LDAP auth provider, I have to create a 
> hive-site.xml, set endpoints and other relevant properties there, and then 
> instantiate `LdapAuthenticationProviderImpl`, since inside the constructor a 
> new HiveConf is constructed. 
> A better and more reusable design would be to ask for the conf in the 
> constructor itself. That would allow an external user to create a HiveConf, 
> set all relevant properties, and instantiate `LdapAuthenticationProviderImpl` 
> with that conf. 





[jira] [Commented] (HIVE-5370) format_number udf should take user specifed format as argument

2016-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181653#comment-15181653
 ] 

Hive QA commented on HIVE-5370:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12791108/HIVE-5370.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9781 tests executed
*Failed tests:*
{noformat}
TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7166/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7166/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7166/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12791108 - PreCommit-HIVE-TRUNK-Build

> format_number udf should take user specifed format as argument
> --
>
> Key: HIVE-5370
> URL: https://issues.apache.org/jira/browse/HIVE-5370
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Amareshwari Sriramadasu
>Assignee: Amareshwari Sriramadasu
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: D13185.1.patch, D13185.2.patch, HIVE-5370.2.patch, 
> HIVE-5370.3.patch, HIVE-5370.patch, HIVE-5370.patch
>
>
> Currently, the format_number udf formats the number to #,###,###.##, but it 
> should also take a user-specified format as optional input.


