exceptionfactory commented on code in PR #10366:
URL: https://github.com/apache/nifi/pull/10366#discussion_r2403954452
##########
nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java:
##########
@@ -58,7 +58,11 @@ public void flush() throws IOException {
     @Override
     public Map<String, String> writeRecord(final Record record) throws IOException {
         final GenericRecord rec = AvroTypeUtil.createAvroRecord(record, schema);
-        dataFileWriter.append(rec);
+        try {
+            dataFileWriter.append(rec);
+        } catch (final DataFileWriter.AppendWriteException e) {
+            throw new IOException(e);
+        }
Review Comment:
Reviewing the call structure, I favor the proposed approach that catches the
`AppendWriteException` and throws something more specific. Wrapping it and
throwing an `IOException` seems appropriate based on the description of
`AppendWriteException`, although I would add a message to the `IOException`.
For broader context, the `JdbcCommon` handling of `dataWriter.append()` is
not directly related, and in that case, catching `AppendWriteException` only
serves to allow for more specific exception messaging.
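To illustrate the suggested refinement, a minimal self-contained sketch of the wrap-with-message pattern follows. The class and method names here are hypothetical stand-ins, not NiFi or Avro code; the nested `AppendWriteException` models Avro's real `DataFileWriter.AppendWriteException`, which is a `RuntimeException` carrying the underlying cause.

```java
import java.io.IOException;

public class AppendErrorSketch {

    // Hypothetical stand-in for org.apache.avro.file.DataFileWriter.AppendWriteException
    static class AppendWriteException extends RuntimeException {
        AppendWriteException(final Throwable cause) {
            super(cause);
        }
    }

    // Stand-in append that always fails, to exercise the catch block
    static void append(final Object record) {
        throw new AppendWriteException(new IllegalArgumentException("field type mismatch"));
    }

    // The pattern under discussion: catch the unchecked AppendWriteException and
    // rethrow it as an IOException that carries a descriptive message plus the cause
    public static void writeRecord(final Object record) throws IOException {
        try {
            append(record);
        } catch (final AppendWriteException e) {
            throw new IOException("Failed to append Record to Avro Data File", e);
        }
    }
}
```

With the message included, callers logging the `IOException` see what operation failed without having to unwrap the cause chain.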

The contract of `RecordReaderFactory.createRecordReader()` defines three
checked exceptions, which the `KafkaMessageConverter` handles as parse
failures. Any other exception propagates to `ConsumeKafka.onTrigger()`, where
the transaction is rolled back. For this reason, catching a general `Exception`
and treating it as a parse failure could mask issues that indicate a
programming bug rather than a problem with the record or schema.
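The distinction above can be sketched with stand-in types. This is an illustrative model only, not NiFi code: the nested exception classes and the `ReaderSupplier` interface are hypothetical, modeling a factory method that declares specific checked exceptions while letting unchecked exceptions propagate to trigger a rollback.

```java
import java.io.IOException;

public class ParseFailureSketch {

    // Hypothetical stand-ins for checked exceptions a reader factory might declare
    static class MalformedRecordException extends Exception { }
    static class SchemaNotFoundException extends Exception { }

    // Models a call whose contract declares specific checked exceptions
    @FunctionalInterface
    interface ReaderSupplier {
        void create() throws MalformedRecordException, SchemaNotFoundException, IOException;
    }

    // Returns true on success, false when the declared checked exceptions occur
    // (treated as a parse failure). Any RuntimeException, such as one indicating
    // a programming bug, propagates unchanged to the caller for rollback handling.
    public static boolean tryCreate(final ReaderSupplier supplier) {
        try {
            supplier.create();
            return true;
        } catch (final MalformedRecordException | SchemaNotFoundException | IOException e) {
            return false;
        }
    }
}
```

Catching only the declared exceptions keeps the parse-failure path narrow: a `NullPointerException` or `IllegalStateException` surfaces instead of being silently routed to a failure relationship.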
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]