[
https://issues.apache.org/jira/browse/TRAFODION-2352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15655586#comment-15655586
]
ASF GitHub Bot commented on TRAFODION-2352:
-------------------------------------------
GitHub user DaveBirdsall opened a pull request:
https://github.com/apache/incubator-trafodion/pull/830
[TRAFODION-2352] UPDATE STATS may fail with error 8446 on Hive tables
There are two fixes here.
1. Fix for JIRA TRAFODION-2352: Propagate the CQD
HIVE_MAX_STRING_LENGTH_IN_BYTES to the grandchild tdm_arkcmp process if needed
to avoid possible error 8446 on Hive tables with very long varchar columns.
2. Fix an unrelated minor problem: Warning 9234 may occur
non-deterministically in test scripts for incremental UPDATE STATS. A previous
pull request, https://github.com/apache/incubator-trafodion/pull/719, had
changed most such warnings so they are emitted only when UPDATE STATS logging
is 'ON'. I missed a couple; this change takes care of them.
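For reference, a sketch of how the logging switch mentioned in fix 2 is used (syntax assumed from Trafodion's UPDATE STATISTICS LOG option; table name and predicate are hypothetical):

```sql
-- With logging ON, informational warnings such as 9234 are emitted;
-- with logging OFF (the default), they are suppressed.
UPDATE STATISTICS LOG ON;
UPDATE STATISTICS FOR TABLE trafodion.sch.t1
  ON EVERY COLUMN INCREMENTAL WHERE c1 > 0;
UPDATE STATISTICS LOG OFF;
```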
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/DaveBirdsall/incubator-trafodion Trafodion2352
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-trafodion/pull/830.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #830
----
commit d2e7567b0b1e0b090ac0bfed1f5fa4183bc6f376
Author: Dave Birdsall <[email protected]>
Date: 2016-11-10T23:51:58Z
[TRAFODION-2352] UPDATE STATS may fail with error 8446 on Hive tables
----
> UPDATE STATS may fail with error 8446 on Hive tables
> ----------------------------------------------------
>
> Key: TRAFODION-2352
> URL: https://issues.apache.org/jira/browse/TRAFODION-2352
> Project: Apache Trafodion
> Issue Type: Bug
> Components: sql-cmp
> Affects Versions: 2.1-incubating
> Environment: All
> Reporter: David Wayne Birdsall
> Assignee: David Wayne Birdsall
>
> UPDATE STATISTICS may fail on a Hive table with very long varchar columns.
> The symptom is an error 8446 "An error occurred during hdfs buffer fetch.
> Error Detail: No record delimiter found in buffer from hdfsRead." This
> happens when reading data from the table, and the query plan for the read
> happens to be a parallel plan.
> The problem relates to the CQD HIVE_MAX_STRING_LENGTH_IN_BYTES, which defaults
> to 32000. When a Hive table has a column value longer than this limit, the
> error can occur even when the user has issued a CQD setting a larger value,
> because that setting is not propagated to the grandchild tdm_arkcmp process.
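The scenario above can be sketched as a sqlci session (table name hypothetical; syntax per Trafodion's CONTROL QUERY DEFAULT statement):

```sql
-- Raise the compiler's Hive string limit above the longest varchar value.
CONTROL QUERY DEFAULT HIVE_MAX_STRING_LENGTH_IN_BYTES '100000';

-- Before this fix, the setting did not reach the grandchild tdm_arkcmp
-- process, so a parallel scan issued on behalf of UPDATE STATISTICS
-- could still fail with error 8446.
UPDATE STATISTICS FOR TABLE hive.hive.wide_table ON EVERY COLUMN;
```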
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)