[
https://issues.apache.org/jira/browse/DRILL-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16017708#comment-16017708
]
ASF GitHub Bot commented on DRILL-5512:
---------------------------------------
Github user sudheeshkatkam commented on a diff in the pull request:
https://github.com/apache/drill/pull/838#discussion_r117531676
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
---
@@ -173,9 +174,8 @@ public IterOutcome next() {
currentReader.allocate(mutator.fieldVectorMap());
} catch (OutOfMemoryException e) {
- logger.debug("Caught Out of Memory Exception", e);
clearFieldVectorMap();
- return IterOutcome.OUT_OF_MEMORY;
+ throw UserException.memoryError(e).build(logger);
--- End diff ---
The non-managed external sort spills to disk when it receives this
outcome. I do not know whether any other operators also handle this outcome.
Are all the prerequisite changes (needed to handle this change) already committed?
> Standardize error handling in ScanBatch
> ---------------------------------------
>
> Key: DRILL-5512
> URL: https://issues.apache.org/jira/browse/DRILL-5512
> Project: Apache Drill
> Issue Type: Improvement
> Affects Versions: 1.10.0
> Reporter: Paul Rogers
> Assignee: Paul Rogers
> Priority: Minor
> Labels: ready-to-commit
> Fix For: 1.10.0
>
>
> ScanBatch is the Drill operator executor that handles most readers. Like most
> Drill operators, it uses an ad-hoc set of error detection and reporting
> methods that evolved over the course of Drill's development.
> This ticket asks to standardize error handling as outlined in DRILL-5083.
> In practice, this means reporting all errors as a {{UserException}} rather
> than using the {{IterOutcome.STOP}} return status or the
> {{FragmentContext.fail()}} method.
> This work requires the new error codes introduced in DRILL-5511, and is a
> step toward making readers aware of vector size limits.
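The pattern the ticket describes (and the diff above applies) can be sketched in plain Java: report a failure by throwing a single user-facing unchecked exception instead of returning a special status code. Everything below is a simplified stand-in for illustration only — the `ErrorHandlingSketch`, `OutOfMemoryException`, and `UserException` classes here mimic the shape of Drill's builder-style API (`UserException.memoryError(e).build(...)`) but are not the real Drill implementation.

```java
public class ErrorHandlingSketch {

    // Hypothetical stand-in for Drill's allocator exception.
    static class OutOfMemoryException extends RuntimeException {
        OutOfMemoryException(String message) { super(message); }
    }

    // Simplified stand-in for Drill's UserException with its builder-style API.
    static class UserException extends RuntimeException {
        private UserException(String message, Throwable cause) {
            super(message, cause);
        }

        static Builder memoryError(Throwable cause) {
            return new Builder("MEMORY ERROR", cause);
        }

        static class Builder {
            private final String category;
            private final Throwable cause;

            Builder(String category, Throwable cause) {
                this.category = category;
                this.cause = cause;
            }

            // Drill's build(logger) also logs the error; this sketch only
            // constructs the exception.
            UserException build() {
                return new UserException(category + ": " + cause.getMessage(), cause);
            }
        }
    }

    // Note: no OUT_OF_MEMORY or STOP members — errors no longer travel
    // through the iterator protocol as status codes.
    enum IterOutcome { OK, NONE }

    // next() reports failure by throwing, not by returning a status.
    static IterOutcome next(boolean allocationFails) {
        try {
            if (allocationFails) {
                // Simulated allocation failure.
                throw new OutOfMemoryException("unable to allocate vectors");
            }
            return IterOutcome.OK;
        } catch (OutOfMemoryException e) {
            // ScanBatch would call clearFieldVectorMap() here before rethrowing.
            throw UserException.memoryError(e).build();
        }
    }

    public static void main(String[] args) {
        System.out.println(next(false)); // prints OK
        try {
            next(true);
        } catch (UserException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

One consequence of this style, which the review comment above is probing: downstream operators that used to react to the `OUT_OF_MEMORY` outcome (such as the non-managed external sort spilling to disk) never see a status code at all once the reader throws, so they need another mechanism to trigger that behavior.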
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)