kirkvicent opened a new issue #7198: batch ingesting data from hadoop
URL: https://github.com/apache/incubator-druid/issues/7198
 
 
   **My Druid version is 0.12.0 and my parser config looks like this:**
        "parser" : {
           "type" : "avro_hadoop",
           "parseSpec" : {
             "format" : "avro",
             "timestampSpec" : {
               "column" : "ORDERED_DATE",
               "format" : "nano"
             },
             "dimensionsSpec" : {
               "dimensions": [
                  {"type": "long","name": "ORDERED_DATE"      }
                        ],
               "dimensionExclusions" : [],
               "spatialDimensions" : []
             }
           }
         }
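
   For reference, the Druid Avro extension docs pair the avro_hadoop parser with a Hadoop ioConfig whose inputSpec uses the Avro value input format. A minimal sketch of that part, assuming a purely hypothetical HDFS path, looks roughly like this:

    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "inputFormat" : "io.druid.data.input.avro.AvroValueInputFormat",
        "paths" : "hdfs:///path/to/orders/*.avro"
      }
    }

   (The paths value above is only a placeholder; the real input location is not shown in this issue.)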
   **Regardless of what I specify in timestampSpec in the configuration file (and I have tried several other columns), Druid always seems to look for a column named "timestamp", and the root exception is:**
    Caused by: java.lang.NullPointerException: Null timestamp in input: {}
        at io.druid.data.input.impl.MapInputRowParser.parseBatch(MapInputRowParser.java:67) ~[druid-api-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.data.input.avro.AvroParsers.parseGenericRecord(AvroParsers.java:61) ~[druid-avro-extensions-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.data.input.AvroHadoopInputRowParser.parseBatch(AvroHadoopInputRowParser.java:51) ~[druid-avro-extensions-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.data.input.AvroHadoopInputRowParser.parseBatch(AvroHadoopInputRowParser.java:31) ~[druid-avro-extensions-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.segment.transform.TransformingInputRowParser.parseBatch(TransformingInputRowParser.java:50) ~[druid-processing-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.indexer.HadoopDruidIndexerMapper.parseInputRow(HadoopDruidIndexerMapper.java:110) ~[druid-indexing-hadoop-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:68) ~[druid-indexing-hadoop-0.12.0-iap7.jar:0.12.0-iap7]
        at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityMapper.run(DetermineHashedPartitionsJob.java:283) ~[druid-indexing-hadoop-0.12.0-iap7.jar:0.12.0-iap7]
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_171]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_171]
        at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_171]
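
   The "{}" at the end of the exception message appears to be the parsed row map, printed empty, which suggests that no fields at all are being extracted from the Avro record, not just the ORDERED_DATE column. One thing commonly checked in this situation is whether the Avro reader schema is passed to the Hadoop job via jobProperties in the tuningConfig; a minimal sketch, assuming a hypothetical schema path, looks like:

    "tuningConfig" : {
      "type" : "hadoop",
      "jobProperties" : {
        "avro.schema.input.value.path" : "/path/to/orders.avsc"
      }
    }

   (The schema path is a placeholder; per the Avro extension docs, "avro.schema.input.value" with an inline schema JSON object can be used instead of the path property.)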
   
   **Any help will be appreciated, thanks!**
