[ https://issues.apache.org/jira/browse/NIFI-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431383#comment-15431383 ]
Matt Burgess commented on NIFI-2625:
------------------------------------

This work looks related to the "other half" of NIFI-1613, added a link so everyone can get on the same page :)

> ConvertJsonToSql truncates SQL timestamp values
> -----------------------------------------------
>
>                 Key: NIFI-2625
>                 URL: https://issues.apache.org/jira/browse/NIFI-2625
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 0.7.0
>         Environment: Ubuntu 16.04
>            Reporter: Charles Bryan Clifford
>              Labels: newbie, patch
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The ConvertJsonToSql processor incorrectly initializes colSize.
> In the ColumnDescription constructor, ResultSet.getInt("COLUMN_SIZE") is
> used to initialize colSize. That method appears to return the size of the
> Java Timestamp data type (which has millisecond precision).
> When generateInsert and generateUpdate parse the in-flowing JSON field
> node's text value in this manner:
>
>     fieldValue = fieldValue.substring(0, colSize);
>
> and next do the following:
>
>     attributes.put("sql.args." + fieldCount + ".value", fieldValue);
>
> the nanoseconds in the timestamp field values present in the in-flowing
> JSON content do not make it into the timestamp values stored in the
> sql.args.N.value FlowFile attribute.
> My source timestamp values have nanosecond precision, and all target
> database timestamp columns likewise use nanoseconds.
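To make the reported truncation concrete, here is a minimal, self-contained sketch of the substring behavior described above. The timestamp string and the colSize value are hypothetical stand-ins chosen for illustration, not values taken from the processor or from any particular JDBC driver:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        // Hypothetical nanosecond-precision timestamp from the JSON input (29 chars)
        String fieldValue = "2016-08-22 10:15:30.123456789";

        // Hypothetical colSize as a driver might report via COLUMN_SIZE:
        // 23 characters is only enough for millisecond precision
        int colSize = 23;

        // The same substring pattern used by generateInsert/generateUpdate
        if (fieldValue.length() > colSize) {
            fieldValue = fieldValue.substring(0, colSize);
        }

        // The nanosecond digits have been cut off
        System.out.println(fieldValue); // prints "2016-08-22 10:15:30.123"
    }
}
```

Running this shows the sub-millisecond digits being silently dropped before the value ever reaches the sql.args.N.value attribute.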
> For your consideration, here's a potential fix to the
> ConvertJsonToSql.ColumnDescription timestamp value truncation problem:
>
>     public static ColumnDescription from(final ResultSet resultSet) throws SQLException {
>         final ResultSetMetaData md = resultSet.getMetaData();
>         final List<String> columns = new ArrayList<>();
>         // NEW - used to store column size, as reported by the database service
>         final Map<String, Integer> columnCache = new HashMap<>();
>         for (int i = 1; i < md.getColumnCount() + 1; i++) {
>             columns.add(md.getColumnName(i));
>             // NEW - get physical column size as reported by the database service
>             columnCache.put(md.getColumnName(i), md.getPrecision(i));
>         }
>
>         final String columnName = resultSet.getString("COLUMN_NAME");
>         final int dataType = resultSet.getInt("DATA_TYPE");
>         // final int colSize = resultSet.getInt("COLUMN_SIZE");
>         final int colSize = columnCache.get(columnName); // NEW
>
> In this way, the data type lengths used by the database service (not by
> Java) will be used to initialize colSize. This could fix the timestamp
> value truncation problem, as well as other conflicts between Java data
> type sizes and target database data type sizes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
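A database-free way to see what the patch changes: colSize is looked up from a per-column cache of the precision the database reports, rather than taken from COLUMN_SIZE. The sketch below simulates that lookup with made-up column names and precisions (they stand in for ResultSetMetaData.getPrecision values and are not from any real driver):

```java
import java.util.HashMap;
import java.util.Map;

public class ColumnCacheDemo {
    public static void main(String[] args) {
        // Simulated stand-in for caching ResultSetMetaData.getPrecision(i):
        // the database reports enough width for a nanosecond timestamp.
        Map<String, Integer> columnCache = new HashMap<>();
        columnCache.put("created_at", 29); // hypothetical timestamp column
        columnCache.put("name", 50);       // hypothetical varchar column

        // colSize now comes from the cache, keyed by column name,
        // mirroring the patched line in ColumnDescription.from(...)
        int colSize = columnCache.get("created_at");

        String fieldValue = "2016-08-22 10:15:30.123456789"; // 29 chars
        if (fieldValue.length() > colSize) {
            fieldValue = fieldValue.substring(0, colSize);
        }

        // With a database-reported width of 29, nothing is truncated
        System.out.println(fieldValue); // prints "2016-08-22 10:15:30.123456789"
    }
}
```

With the wider, database-reported size, the substring branch is never taken and the full nanosecond value survives into the FlowFile attribute.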