turcsanyip commented on a change in pull request #4750:
URL: https://github.com/apache/nifi/pull/4750#discussion_r555945721
##########
File path: nifi-nar-bundles/nifi-hive-bundle/nifi-hive_1_1-processors/src/main/java/org/apache/nifi/processors/hive/UpdateHive_1_1Table.java
##########

@@ -562,7 +686,35 @@ private synchronized void checkAndUpdateTableSchema(final ProcessSession session
             outputPath = tableLocation + "/" + String.join("/", partitionColumnsLocationList);
         }

-        session.putAttribute(flowFile, ATTR_OUTPUT_PATH, outputPath);
+        // If updating field names, return a new RecordSchema, otherwise return null
+        OutputMetadataHolder outputMetadataHolder;
+        if (!tableCreated && updateFieldNames) {
+            List<RecordField> inputRecordFields = schema.getFields();
+            List<RecordField> outputRecordFields = new ArrayList<>();
+            Map<String, String> fieldMap = new HashMap<>();
+
+            for (RecordField inputRecordField : inputRecordFields) {
+                final String inputRecordFieldName = inputRecordField.getFieldName();
+                boolean found = false;
+                for (String hiveColumnName : hiveColumns) {
+                    if (inputRecordFieldName.equalsIgnoreCase(hiveColumnName)) {

Review comment:
If 'Update Field Names' has been set to true, it always remaps the records, even when all of the column names are identical and the conversion would not be needed. It could be optimised to convert the records only when necessary. In that case the record reader and writer must use the same data format, because some FlowFiles would be converted while others would not. Actually, this is already a requirement for the create + update scenario (create never converts) and should be documented for the users.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org
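The reviewer's optimisation (only convert the records when a rename would actually change something) could be sketched roughly as follows. The class and method names here are hypothetical, not NiFi's actual API; the case-insensitive match mirrors the `equalsIgnoreCase` check in the diff above, and a remap is only worthwhile when a field matches a Hive column case-insensitively but not with identical casing.

```java
import java.util.List;

// Hypothetical helper illustrating the suggestion: decide up front whether
// the record conversion is needed at all, so FlowFiles whose field names
// already match the Hive columns exactly can pass through untouched.
public class SchemaRemapCheck {

    // Returns true only if some record field matches a Hive column
    // case-insensitively but not exactly, i.e. remapping the field name
    // to the Hive column name would actually change the record.
    public static boolean needsRemapping(List<String> recordFieldNames,
                                         List<String> hiveColumnNames) {
        for (String fieldName : recordFieldNames) {
            for (String hiveColumn : hiveColumnNames) {
                if (fieldName.equalsIgnoreCase(hiveColumn)
                        && !fieldName.equals(hiveColumn)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Identical names: no conversion needed
        System.out.println(needsRemapping(List.of("id", "name"), List.of("id", "name"))); // false
        // Casing differs ("Name" vs "name"): conversion needed
        System.out.println(needsRemapping(List.of("id", "Name"), List.of("id", "name"))); // true
    }
}
```

As the review notes, with this optimisation some FlowFiles are rewritten and others are not, so the reader and writer must share a data format; that constraint already exists for the create + update scenario and belongs in the processor documentation either way.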