[
https://issues.apache.org/jira/browse/NIFI-3898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16012335#comment-16012335
]
Koji Kawamura commented on NIFI-3898:
-------------------------------------
[~egortsaryk9] Thank you very much for sharing all the logs and screenshots. I
found the following stack trace in nifi-app.log, which indicates that the issue
happens inside the PostgreSQL JDBC driver:
{code}
Caused by: java.lang.ArrayIndexOutOfBoundsException: null
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:597)
    at java.lang.StringBuilder.append(StringBuilder.java:190)
    at org.postgresql.core.Parser.parseSql(Parser.java:1032)
    at org.postgresql.core.Parser.replaceProcessing(Parser.java:972)
    at org.postgresql.core.CachedQueryCreateAction.create(CachedQueryCreateAction.java:41)
    at org.postgresql.core.CachedQueryCreateAction.create(CachedQueryCreateAction.java:17)
    at org.postgresql.util.LruCache.borrow(LruCache.java:115)
    at org.postgresql.core.QueryExecutorBase.borrowQuery(QueryExecutorBase.java:266)
    at org.postgresql.jdbc.PgConnection.borrowQuery(PgConnection.java:143)
    at org.postgresql.jdbc.PgPreparedStatement.<init>(PgPreparedStatement.java:88)
    at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:1256)
    at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:1622)
    at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:415)
    at org.apache.commons.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:281)
    at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.prepareStatement(PoolingDataSource.java:313)
    at org.apache.nifi.processors.standard.PutSQL$StatementFlowFileEnclosure.getCachedStatement(PutSQL.java:1070)
    at org.apache.nifi.processors.standard.PutSQL.lambda$null$5(PutSQL.java:284)
    at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
    ... 19 common frames omitted
{code}
Would you be able to try the latest PostgreSQL driver, available here?
https://jdbc.postgresql.org/download.html
Hopefully version 42.1.1 (JDBC 42) will address the issue. Thanks!
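For what it's worth, the unterminated string literal can be detected before the statement ever reaches the driver. A minimal sketch (the class and method names here are hypothetical, not part of NiFi or pgjdbc) that flags an unbalanced single-quoted literal like the one in the reported INSERT:

{code:java}
// Hypothetical pre-check, NOT part of NiFi or the PostgreSQL driver:
// scan a SQL statement for an unterminated single-quoted literal,
// the condition that trips the pgjdbc parser in the stack trace above.
public class SqlQuoteCheck {

    // Returns true when every single-quoted literal is closed.
    // A doubled quote ('') inside a literal is the SQL escape for a
    // single quote, and simply toggling on every quote handles it:
    // the pair toggles the state off and back on again.
    static boolean quotesBalanced(String sql) {
        boolean inLiteral = false;
        for (int i = 0; i < sql.length(); i++) {
            if (sql.charAt(i) == '\'') {
                inLiteral = !inLiteral;
            }
        }
        return !inLiteral;
    }

    public static void main(String[] args) {
        // The INSERT from the report, with its redundant apostrophe.
        String broken = "INSERT INTO public.my_table(id, data) "
            + "VALUES('220f27c5-ce2f-4ab4-8bdd-fc9187d36783', '"
            + "'{\"my_jsonb_data_field\":\"some data\"}') ON CONFLICT DO NOTHING;";
        // The same statement with the extra apostrophe removed.
        String fixed = "INSERT INTO public.my_table(id, data) "
            + "VALUES('220f27c5-ce2f-4ab4-8bdd-fc9187d36783', "
            + "'{\"my_jsonb_data_field\":\"some data\"}') ON CONFLICT DO NOTHING;";

        System.out.println(quotesBalanced(broken)); // prints "false"
        System.out.println(quotesBalanced(fixed));  // prints "true"
    }
}
{code}

A check like this could sit in a RouteOnContent/ExecuteScript step ahead of PutSQL to divert malformed statements, independent of the driver bug itself.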
> PutSql - flow files get stuck in incoming queue if there are incorrect INSERT
> statements in flow files
> ------------------------------------------------------------------------------------------------------
>
> Key: NIFI-3898
> URL: https://issues.apache.org/jira/browse/NIFI-3898
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.2.0
> Reporter: yahor tsaryk
> Labels: putsql
> Attachments: 6463044572292-data (flow file content), Controller
> Service.png, Flow Files Listing.png, nifi-app.log (REPORT),
> nifi-bootstrap.log (REPORT), nifi-user.log (REPORT), PutSql Properties.png,
> PutSql Scheduling.png, PutSql Settings.png, Screen Shot 2017-05-15 at
> 21.25.46.png, Screen Shot 2017-05-15 at 21.25.54.png, UI interface.png
>
>
> Hi everybody, I just updated to version 1.2.0 and found that if an incoming
> flow file for the PutSql processor contains an incorrect SQL INSERT statement
> (such as
> INSERT INTO public.my_table(id, data)
> VALUES('220f27c5-ce2f-4ab4-8bdd-fc9187d36783', '
> '{"my_jsonb_data_field":"some data"}') ON CONFLICT DO NOTHING; - the
> statement contains a redundant apostrophe character), the flow files just get
> stuck in the incoming queue; they are not directed to the "Failure" queue.
> I don't use the "Rollback On Failure" feature - it is set to false. The
> "Support Fragmented Transactions" option is also set to false.
> I also tried setting the "Batch Size" value to 1, but the result is the same
> as with the default batch size value (100). (It fails with
> "java.lang.ArrayIndexOutOfBoundsException".)
> Shouldn't incoming flow files with incorrect/broken SQL statements be
> directed to the "Failure" relationship automatically? The current behavior
> looks as if "Rollback On Failure" were set to true, but I just want to
> filter out incorrect SQL INSERT statements.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)