[ https://issues.apache.org/jira/browse/HIVE-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12662074#action_12662074 ]
Johan Oskarsson commented on HIVE-217:
--------------------------------------
Looking into this a bit further, the exception is actually caused by the reduce
task timing out:
"Task attempt_200901071012_0373_r_000031_0 failed to report status for 616 seconds. Killing!"
The kill in turn closes the FileSystem/stream while the FileSink keeps trying
to write, causing the exception seen in the log.
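As an illustration of that failure mode (a plain java.io analogy, not the
actual FileSinkOperator code path), a buffered Java stream throws exactly this
IOException when it is written to after being closed:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    // Analogy only: BufferedWriter raises the same "Stream closed"
    // IOException that the FileSink hits once the killed task's output
    // stream has been closed underneath it.
    public class StreamClosedDemo {
        public static void main(String[] args) throws IOException {
            BufferedWriter out =
                new BufferedWriter(new FileWriter("/tmp/stream-closed-demo.txt"));
            out.write("first write succeeds");
            out.close(); // the task kill closes the sink's stream, much like this
            out.write("second write"); // java.io.IOException: Stream closed
        }
    }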
Now the question is why the task fails to report status.
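For what it's worth, the 616 seconds in the log is consistent with the default
mapred.task.timeout of 600,000 ms in 0.18.x: a task is killed if it neither
emits output nor reports progress for that long. A minimal sketch of the usual
workaround under the old org.apache.hadoop.mapred API (SlowReducer and the
batch size are hypothetical, not Hive code) is to ping the Reporter during
long stretches of otherwise silent work:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical reducer doing long, silent per-key work (e.g. a large
    // group-by aggregation): calling reporter.progress() periodically keeps
    // the TaskTracker from killing the attempt for inactivity.
    public class SlowReducer extends MapReduceBase
            implements Reducer<Text, LongWritable, Text, LongWritable> {

        public void reduce(Text key, Iterator<LongWritable> values,
                OutputCollector<Text, LongWritable> output, Reporter reporter)
                throws IOException {
            long sum = 0;
            long seen = 0;
            while (values.hasNext()) {
                sum += values.next().get();
                if (++seen % 10000 == 0) {
                    reporter.progress(); // heartbeat to the TaskTracker
                }
            }
            output.collect(key, new LongWritable(sum));
        }
    }

In Hive's case any fix would belong in the execution layer rather than user
code, but the mechanism is the same.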
> Stream closed exception
> -----------------------
>
> Key: HIVE-217
> URL: https://issues.apache.org/jira/browse/HIVE-217
> Project: Hadoop Hive
> Issue Type: Bug
> Components: Serializers/Deserializers
> Environment: Hive from trunk, hadoop 0.18.2, ~20 machines
> Reporter: Johan Oskarsson
> Priority: Critical
> Fix For: 0.2.0
>
> Attachments: HIVE-217.log
>
>
> When running a query similar to the following:
> "insert overwrite table outputtable select a, b, cast(sum(counter) as INT)
> from tablea join tableb on (tablea.username=tableb.username) join tablec on
> (tablec.userid = tablea.userid) join tabled on (tablec.id=tabled.id) where
> insertdate >= 'somedate' and insertdate <= 'someotherdate' group by a, b;"
> where one table is ~40 GB and the others are a couple of hundred MB, the
> error happens in the first map-reduce job, the one that processes the 40 GB table.
> I get the following exception (see attached file for full stack trace):
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Stream closed.
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:162)
> It happens in one reduce task and is reproducible: running the same query
> gives the same error.