MikeThomsen commented on a change in pull request #3468: NIFI-6289: using charset for byte encoding in ExecuteSparkInteractive
URL: https://github.com/apache/nifi/pull/3468#discussion_r283059197
##########
File path: nifi-nar-bundles/nifi-spark-bundle/nifi-livy-processors/src/main/java/org/apache/nifi/processors/livy/ExecuteSparkInteractive.java
##########
@@ -184,13 +184,7 @@ public void onTrigger(ProcessContext context, final ProcessSession session) thro
return;
}
final long statusCheckInterval = context.getProperty(STATUS_CHECK_INTERVAL).evaluateAttributeExpressions(flowFile).asTimePeriod(TimeUnit.MILLISECONDS);
- Charset charset;
- try {
- charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
- } catch (Exception e) {
- log.warn("Illegal character set name specified, defaulting to UTF-8");
- charset = StandardCharsets.UTF_8;
- }
+ Charset charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
Review comment:
I agree that it shouldn't silently default to UTF-8, but a validator should be set on the CHARSET property so that an illegal character set name is rejected at configuration time rather than failing at runtime. Other than that, LGTM.
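A minimal sketch of the check such a validator would perform, using only the JDK. (NiFi ships a built-in `StandardValidators.CHARACTER_SET_VALIDATOR` that can be attached to a `PropertyDescriptor` via `.addValidator(...)`; the class and method names below are illustrative, not the processor's actual code.)

```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;

// Illustrative only: roughly what a character-set validator checks at
// configuration time, so that Charset.forName never throws in onTrigger.
public class CharsetValidationSketch {

    // Returns true if the name refers to a charset this JVM supports.
    static boolean isValidCharset(String name) {
        if (name == null || name.isEmpty()) {
            return false;
        }
        try {
            return Charset.isSupported(name);
        } catch (IllegalCharsetNameException e) {
            // Name contains characters that are not legal in a charset name.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidCharset("UTF-8"));        // true on any JVM
        System.out.println(isValidCharset("no such set!")); // false: illegal name
    }
}
```

With such a validator in place, the try/catch removed in this diff is safe to drop: an invalid character set name can never reach the processor at runtime.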
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services