ekovacs commented on a change in pull request #3468: NIFI-6289: using charset for byte encoding in ExecuteSparkInteractive
URL: https://github.com/apache/nifi/pull/3468#discussion_r283193813
##########
File path: nifi-nar-bundles/nifi-spark-bundle/nifi-livy-processors/src/main/java/org/apache/nifi/processors/livy/ExecuteSparkInteractive.java
##########
@@ -184,13 +184,7 @@ public void onTrigger(ProcessContext context, final ProcessSession session) thro
             return;
         }
         final long statusCheckInterval = context.getProperty(STATUS_CHECK_INTERVAL).evaluateAttributeExpressions(flowFile).asTimePeriod(TimeUnit.MILLISECONDS);
-        Charset charset;
-        try {
-            charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
-        } catch (Exception e) {
-            log.warn("Illegal character set name specified, defaulting to UTF-8");
-            charset = StandardCharsets.UTF_8;
-        }
+        Charset charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
Review comment:
hello @MikeThomsen,
the property descriptor for the CHARSET field is already set up with `required`, a `default value`, and a `validator`.
That is why I removed the catch clause: with all of the above property descriptor configs in place (default, required, and validator), execution could never reach it.
https://github.com/apache/nifi/blob/535d65370fa3b225b59bd681ccbe4b8cf898abb3/nifi-nar-bundles/nifi-spark-bundle/nifi-livy-processors/src/main/java/org/apache/nifi/processors/livy/ExecuteSparkInteractive.java#L97-L105
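To illustrate the point, here is a minimal stand-alone sketch (class and string names are illustrative, not from the PR) of the behavior the removed catch was defending against: `Charset.forName` only throws for names that a character-set validator on the property would already have rejected at configuration time, so a valid, required, defaulted property can never produce the exception at `onTrigger` time.

```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;

public class CharsetLookupDemo {
    public static void main(String[] args) {
        // A valid charset name resolves without any defensive handling.
        System.out.println(Charset.forName("UTF-8").name()); // prints "UTF-8"

        // An illegal name (spaces, '!') throws IllegalCharsetNameException.
        // A validator on the property rejects such values before the
        // processor runs, so this branch is unreachable in the processor.
        try {
            Charset.forName("not a charset!");
        } catch (IllegalCharsetNameException e) {
            System.out.println("rejected: " + e.getCharsetName());
        }
    }
}
```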
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services