The CSVReader, if using "Use String Fields From Header" as the Schema Access Strategy, will treat all fields as Strings, since there is no good way to know what values the records/rows will contain for each column. In that case you'd need to know the schema(s) of the possible incoming CSV files, add them to an AvroSchemaRegistry instance, and use a different Schema Access Strategy.
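For example, for the City/Count file from the original message, the schema you register in the AvroSchemaRegistry might look like the following (the schema name "city_counts" here is just an illustrative choice):

```json
{
  "type": "record",
  "name": "city_counts",
  "fields": [
    { "name": "City",  "type": "string" },
    { "name": "Count", "type": "int" }
  ]
}
```

With a CSVReader whose Schema Access Strategy uses the "Schema Name" property (pointed at the registry entry above), Count would be read as an integer rather than a String.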
If there is a way to tell from the filename or something in the content which schema corresponds to it, you could use RouteOnAttribute and/or RouteOnContent to send the different CSV files down different paths, where you could either set the "avro.schema" attribute to the schema explicitly or, if they are all in the AvroSchemaRegistry, set the "schema.name" attribute to the corresponding name in the registry. Then all the paths could join back up into a single ConvertRecord, or each branch could have its own. In either case, the attribute you set would correspond to the Schema Access Strategy you select in your CSVReader.

Regards,
Matt

On Thu, May 18, 2017 at 2:10 AM, [email protected] <[email protected]> wrote:

> Hi,
> Thanks for your help. It worked.
>
> I used the following processors:
> GetFile --> PutDatabaseRecord --> PutFile
>
> The only doubt I have is how to specify a particular data type for a column.
>
> For example, I have a file like the one below and want to insert it into a
> table with City as Varchar and Count as Integer in Postgres:
>
> City,Count
> Mumbai,10
> Mumbai,10
> Pune,10
> Pune,10
>
> --
> View this message in context:
> http://apache-nifi-developer-list.39713.n7.nabble.com/How-to-use-ConvertRecord-Processor-tp15873p15901.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
