[ https://issues.apache.org/jira/browse/HIVE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199956#comment-14199956 ]

Hive QA commented on HIVE-8744:
-------------------------------



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12679708/HIVE-8744.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6674 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_mapjoin_reduce
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1657/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1657/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1657/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12679708 - PreCommit-HIVE-TRUNK-Build

> hbase_stats3.q test fails when paths stored at JDBCStatsUtils.getIdColumnName() are too large
> ---------------------------------------------------------------------------------------------
>
>                 Key: HIVE-8744
>                 URL: https://issues.apache.org/jira/browse/HIVE-8744
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.15.0
>            Reporter: Sergio Peña
>            Assignee: Sergio Peña
>         Attachments: HIVE-8744.1.patch
>
>
> This test failure is related to HIVE-8065, where I am working on supporting HDFS 
> encryption. One of the changes made to support it is to create a .hive-staging 
> directory inside the table directory location where the query is executed.
> When the hbase_stats3.q test runs from a temporary directory with a long path, 
> the new path (a combination of the table location, .hive-staging, and random 
> temporary subdirectories) is too long to fit into the statistics table, so the 
> path is truncated.
> This causes the following error:
> {noformat}
> 2014-11-04 08:57:36,680 ERROR [LocalJobRunner Map Task Executor #0]: jdbc.JDBCStatsPublisher (JDBCStatsPublisher.java:publishStat(199)) - Error during publishing statistics. 
> java.sql.SQLDataException: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeLargeUpdate(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher$2.run(JDBCStatsPublisher.java:148)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher$2.run(JDBCStatsPublisher.java:145)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.executeWithRetry(Utilities.java:2667)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.publishStat(JDBCStatsPublisher.java:161)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.publishStats(FileSinkOperator.java:1031)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:870)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:579)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> Caused by: java.sql.SQLException: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
> 	... 30 more
> Caused by: ERROR 22001: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLChar.hasNonBlankChars(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
> 	at org.apache.derby.iapi.types.DataTypeDescriptor.normalize(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeColumn(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeRow(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.DMLWriteResultSet.getNextRowCore(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown Source)
> 	at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
> 	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
> 	... 24 more
> {noformat}
> We should increase the size of the VARCHAR column to fix this.
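The length problem described above can be illustrated with a minimal, self-contained sketch. This is not code from the patch; the path components below are hypothetical examples chosen only to show how a table location plus a .hive-staging subtree can exceed the 255-character VARCHAR column that the Derby-backed stats publisher uses:

```java
// Minimal sketch: why a staging path can exceed a VARCHAR(255) stats column.
// All path components here are hypothetical examples, not values from the test run.
public class StagingPathLength {
    // Width of the id column in the JDBC stats table (the limit hit in the error above)
    static final int ID_COLUMN_WIDTH = 255;

    public static void main(String[] args) {
        String tableLocation = "pfile:/home/hiveptest/hive-ptest-working-dirs/"
                + "some-very-long-jenkins-workspace-name-for-precommit-builds/"
                + "apache-hive-source-tree/itests/qtest/target/warehouse/default/stats_src";

        // With the HIVE-8065 change, staging dirs live under the table location,
        // with timestamped and randomized subdirectories appended:
        String stagingPath = tableLocation
                + "/.hive-staging_hive_2014-11-04_08-57-36_123_4567890123456789012-1"
                + "/-ext-10002/tmpstats-0";

        System.out.println("path length = " + stagingPath.length());
        if (stagingPath.length() > ID_COLUMN_WIDTH) {
            // Derby raises SQLDataException (ERROR 22001) rather than silently truncating
            System.out.println("too long for VARCHAR(" + ID_COLUMN_WIDTH + ")");
        }
    }
}
```

Running this prints a length above 255, matching the truncation failure in the log; widening the column (or shortening/hashing the stored key) avoids it.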



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)