Found these errors in the Docker logs:

postgres_1       | 2018-04-05 18:33:22.183 UTC [51] ERROR:  column
"timestamp_field" is of type timestamp without time zone but expression is
of type bigint at character 282
postgres_1       | 2018-04-05 18:33:22.183 UTC [51] HINT:  You will need to
rewrite or cast the expression.
postgres_1       | 2018-04-05 18:33:22.183 UTC [51] STATEMENT:  INSERT INTO
provenance (componentid, componentname, componenttype, details, entityid,
entitysize, entitytype, eventid, eventtype, processgroupid,
processgroupname, record_count, schema_name, timestamp_field) VALUES
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14)
nifi2            | 2018-04-05 18:33:22,184 WARN [Timer-Driven Process
Thread-2] o.a.n.p.standard.PutDatabaseRecord
PutDatabaseRecord[id=dfdd3dd1-21ee-16e2-09cc-159b7c7f8f54] Failed to
process
StandardFlowFileRecord[uuid=8cf3b521-ac0b-4149-a60e-5fa7b2d2b3c5,claim=StandardContentClaim
[resourceClaim=StandardResourceClaim[id=1522953007606-749,
container=default, section=749], offset=434600,
length=528],offset=0,name=4306158395541,size=528] due to
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO provenance
(componentid, componentname, componenttype, details, entityid, entitysize,
entitytype, eventid, eventtype, processgroupid, processgroupname,
record_count, schema_name, timestamp_field) VALUES
('8d08a7d3-0162-1000-7216-7e0b426e774a','EvaluateJsonPath','EvaluateJsonPath',NULL,'26a1efa3-bcd3-4015-8a43-2a2c43c05714',66307,'org.apache.nifi.flowfile.FlowFile','e841aa4e-a7d3-4a6e-b78c-4e785f53b60e','ATTRIBUTES_MODIFIED','8c5c89d5-0162-1000-4395-a38d1c7b0b2f','Mongo
ES Test',NULL,NULL,1522952968586) was aborted: ERROR: column
"timestamp_field" is of type timestamp without time zone but expression is
of type bigint
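
The bigint in that failing insert looks like epoch milliseconds. Just to show
what the value represents, something like this in psql should turn it back into
a timestamp (a sketch I haven't actually run, not part of the flow):

SELECT to_timestamp(1522952968586 / 1000.0) AT TIME ZONE 'UTC';
-- roughly 2018-04-05 18:29:28.586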

I have the following DDL and Avro schema:

create table provenance (
id serial,
componentId varchar(128),
componentName varchar(256),
componentType varchar(128),
details varchar(256),
entityId varchar(256),
entitySize int,
entityType varchar(128),
eventId varchar(128),
eventType varchar(128),
processGroupId varchar(128),
processGroupName varchar(128),
record_count int,
schema_name varchar(64),
timestamp_field timestamp
)

{
"type": "record",
"name": "ProvenanceEvent",
"fields": [
{ "name": "componentId", "type": ["null", "string"] },
{ "name": "componentName", "type": ["null", "string"] },
{ "name": "componentType", "type": ["null", "string"] },
{ "name": "details", "type": ["null", "string"] },
{ "name": "entityId", "type": ["null", "string"] },
{ "name": "entitySize", "type": ["null", "int"] },
{ "name": "entityType", "type": ["null", "string"] },
{ "name": "eventId", "type": ["null", "string"] },
{ "name": "eventType", "type": ["null", "string"] },
{ "name": "processGroupId", "type": ["null", "string"] },
{ "name": "processGroupName", "type": ["null", "string"] },
{ "name": "record_count", "type": ["null", "int"] },
{ "name": "schema_name", "type": ["null", "string"] },
{ "name": "timestamp_field", "type": "long", "logicalType":
"timestamp-millis" }
]
}
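
For reference, the Avro spec examples nest the logical type inside the type
object rather than putting it on the field, like below; I'm not sure whether
my flat form above gets treated the same way:

{ "name": "timestamp_field", "type": { "type": "long", "logicalType": "timestamp-millis" } }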

I set the JsonTreeReader to use this timestamp format for parsing the
original ISO 8601 string:

yyyy-MM-dd'T'HH:mm:ssZ
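
So the incoming JSON carries the timestamp as a string in that pattern,
something along these lines (a made-up record, other fields omitted):

{ "timestamp_field": "2018-04-05T18:29:28+0000" }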

Everything looks fine until it gets to the PutDatabaseRecord processor. Any ideas?

Thanks,

Mike
