Hi,
No, JDBC metadata is always used. I'll try to create a JUnit test to reproduce the problem. I don't have MySQL, so I will use Derby for testing. Hopefully the MySQL JDBC driver does not behave differently.

toivo

2015-09-17 16:48 GMT+03:00 Jonathan Lyons <[email protected]>:

> Hi,
>
> Thanks for the response. Indeed it looks like changing the query from:
>
> SELECT * from users
>
> to:
>
> SELECT id, email from users
>
> causes it to start working. Does the JDBC metadata get dropped when using
> the column wildcard?
>
> Jonathan
>
> On Sat, Sep 12, 2015 at 4:59 AM, Toivo Adams <[email protected]> wrote:
>
>> Hi,
>>
>> ExecuteSQL generates the Avro schema automatically using JDBC metadata
>> from the query result.
>>
>> It seems the number of columns in the generated Avro schema and in a row
>> from the ResultSet is different.
>>
>> Probably a bug in ExecuteSQL.
>>
>> Please can you share your SQL select query and database table definition?
>> And maybe even some sample data which causes the problem?
>>
>> Thanks
>> Toivo
>>
>> 2015-09-11 18:43 GMT+03:00 Jonathan Lyons <[email protected]>:
>>
>>> Hi,
>>>
>>> Just getting started with NiFi here. I am attempting to run a static
>>> query in MySQL using the ExecuteSQL processor. It is set to run on a
>>> 5-second interval. Since ExecuteSQL appears to need an input flow file,
>>> I'm using a GenerateFlowFile processor to produce a random file every 5
>>> seconds. Unfortunately, I'm getting a very vague ArrayIndexOutOfBounds
>>> exception when I hit play on the flow:
>>>
>>> java.lang.ArrayIndexOutOfBoundsException: 8
>>>     at org.apache.avro.generic.GenericData$Record.put(GenericData.java:129)
>>>     at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream
>>>     at org.apache.nifi.processors.standard.ExecuteSQL$1.process(ExecuteSQL.java:141) ~[na:na]
>>>
>>> Any idea why this is?
>>>
>>> Thanks,
>>> Jonathan
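For anyone following along: the stack trace is consistent with the column-count mismatch Toivo describes. This is a minimal sketch of the suspected failure mode, not NiFi's actual code; the `Record` class below is a simplified stand-in for Avro's `GenericData.Record`, which backs its fields with a fixed-size array, and `firstFailingIndex` is a hypothetical helper for illustration. The assumption (unverified against the NiFi source) is that the schema is built once from `ResultSetMetaData` and then one `put(columnIndex, value)` is issued per column of every row.

```java
public class SchemaMismatchSketch {

    // Simplified stand-in for Avro's GenericData.Record:
    // one array slot per field declared in the schema.
    static class Record {
        private final Object[] values;
        Record(int fieldCount) { values = new Object[fieldCount]; }
        void put(int i, Object v) { values[i] = v; }  // AIOOBE when i >= fieldCount
    }

    /** Returns the column index at which filling the record fails, or -1 if all columns fit. */
    static int firstFailingIndex(int schemaFields, int rowColumns) {
        Record record = new Record(schemaFields);
        for (int i = 0; i < rowColumns; i++) {
            try {
                record.put(i, "value-" + i);
            } catch (ArrayIndexOutOfBoundsException e) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Schema generated with 8 fields, but the row yields 9 columns:
        // the first failing index is 8, matching "ArrayIndexOutOfBoundsException: 8".
        System.out.println(firstFailingIndex(8, 9));
    }
}
```

If the generated schema had matched the row width (8 fields, 8 columns), every `put` would succeed, which is also why switching from `SELECT *` to an explicit column list can mask the problem.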
