belugabehr commented on a change in pull request #1742:
URL: https://github.com/apache/hive/pull/1742#discussion_r674896581
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/io/RecordReaderWrapper.java
##########
@@ -69,7 +70,14 @@ static RecordReader create(InputFormat inputFormat, HiveInputFormat.HiveInputSpl
       JobConf jobConf, Reporter reporter) throws IOException {
     int headerCount = Utilities.getHeaderCount(tableDesc);
     int footerCount = Utilities.getFooterCount(tableDesc, jobConf);
-    RecordReader innerReader = inputFormat.getRecordReader(split.getInputSplit(), jobConf, reporter);
+
+    RecordReader innerReader = null;
+    try {
+      innerReader = inputFormat.getRecordReader(split.getInputSplit(), jobConf, reporter);
+    } catch (InterruptedIOException iioe) {
+      // If reading from the underlying record reader is interrupted, return a no-op record reader
+      return new ZeroRowsInputFormat().getRecordReader(split.getInputSplit(), jobConf, reporter);
Review comment:
Hey.
So, in my experimentation, this is the least-bad option. I did this to
preserve the previous behavior. The Hive code is not set up to handle this
error condition. As things currently stand in `master`, if the calling thread
was interrupted, the thread would finish fetching the rows regardless and then
simply ignore them (throw them away) later. The calling code handles neither a
`null` return value nor this exception: as currently implemented in Hive
`master`, if it gets an exception it simply exits execution with an error
message. Without implementing a lot more code, there is no way to ignore/skip
this one specific error type. So the cleanest thing to do is to return
`ZeroRows`, since its rows are going to be thrown away later anyway.
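To illustrate the pattern being discussed (not Hive's actual classes), here is a minimal self-contained Java sketch: a factory that tries to build the real reader, and on `InterruptedIOException` falls back to a no-op "zero rows" reader instead of propagating an exception the caller cannot handle. The names `SimpleReader`, `ReaderFactory`, and `ZERO_ROWS` are hypothetical stand-ins for `RecordReader`, `InputFormat.getRecordReader`, and `ZeroRowsInputFormat`.

```java
import java.io.InterruptedIOException;

public class ZeroRowsFallback {

    /** Hypothetical stand-in for Hadoop's RecordReader. */
    interface SimpleReader {
        boolean next(); // true while rows remain
    }

    /** A reader that reports no rows at all, analogous to ZeroRowsInputFormat. */
    static final SimpleReader ZERO_ROWS = () -> false;

    /** Hypothetical stand-in for InputFormat.getRecordReader(...). */
    interface ReaderFactory {
        SimpleReader create() throws InterruptedIOException;
    }

    static SimpleReader createReader(ReaderFactory factory) {
        try {
            return factory.create();
        } catch (InterruptedIOException iioe) {
            // Interrupted during setup: hand back a no-op reader rather than
            // surface an exception the calling code does not handle; its
            // (zero) rows would be thrown away later anyway.
            return ZERO_ROWS;
        }
    }

    public static void main(String[] args) {
        // Simulate a reader whose creation is interrupted mid-fetch.
        SimpleReader reader = createReader(() -> {
            throw new InterruptedIOException("fetch interrupted");
        });
        System.out.println("has rows: " + reader.next());
    }
}
```

Running `main` prints `has rows: false` — the caller simply sees an empty split and continues, which is the "least-bad" behavior described above.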
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]