KenjiTakahashi opened a new issue #2106: Zero filling does not work in 
timeseries queries.
URL: https://github.com/apache/incubator-druid/issues/2106
 
 
   That's using Druid 0.8.2.
   
   An example query and result:
   
   ``` bash
   (virtualenv)2015-12-16 23:38:27
   monitowl-dev:root:/vagrant> cat query2.json
   {
       "filter": {
           "fields": [
               {
                   "type": "selector",
                   "dimension": "<cut>",
                   "value": "<cut>"
               },
               {
                   "type": "selector",
                   "dimension": "<cut>",
                   "value": "<cut>"
               }
           ],
           "type": "and"
       },
       "intervals": "2015-11-04T03:41:58.042000/2015-11-11T22:20:14.927000",
       "dataSource": "logdb_data",
       "granularity": {
           "origin": "1970-01-01T00:00:00",
           "type": "period",
           "period": "PT50000S"
       },
       "postAggregations": [
           {
               "fields": [
                   {
                       "fieldName": "sum",
                       "type": "fieldAccess",
                       "name": "sum"
                   },
                   {
                       "fieldName": "count",
                       "type": "fieldAccess",
                       "name": "count"
                   }
               ],
               "type": "arithmetic",
               "name": "result",
               "fn": "/"
           }
       ],
       "queryType": "timeseries",
       "aggregations": [
           {
               "type": "count",
               "name": "count"
           },
           {
               "fieldName": "data_num",
               "type": "doubleSum",
               "name": "sum"
           }
       ]
   }
   2015-12-16 23:38:30 cat query2.json
   (virtualenv)2015-12-16 23:38:30
   monitowl-dev:root:/vagrant> curl -X POST 
'http://localhost:10000/druid/v2/?pretty' -H 'content-type: application/json' 
-d@query2.json
   [ {
     "timestamp" : "2015-11-04T01:20:00.000Z",
     "result" : {
       "result" : 0.0,
       "count" : 0,
       "sum" : 0.0
     }
   }, {
     "timestamp" : "2015-11-04T15:13:20.000Z",
     "result" : {
       "result" : 0.0,
       "count" : 0,
       "sum" : 0.0
     }
   }, {
     "timestamp" : "2015-11-10T10:06:40.000Z",
     "result" : {
       "result" : 0.0,
       "count" : 0,
       "sum" : 0.0
     }
   }, {
     "timestamp" : "2015-11-11T00:00:00.000Z",
     "result" : {
       "result" : 0.0,
       "count" : 0,
       "sum" : 0.0
     }
   }, {
     "timestamp" : "2015-11-11T13:53:20.000Z",
     "result" : {
       "result" : 0.0,
       "count" : 0,
       "sum" : 0.0
     }
   } ]
   2015-12-16 23:38:32 curl -X POST 'http://localhost:10000/druid/v2/?pretty' -H 'content-type: application/json' -d@query2.json
   ```
   
   This uses a period of 50000 seconds, which is almost 14 hours.
   Note that between the 2nd and 3rd results there is a gap of roughly five days with no data, and it is not filled with zeroes.
   
   Explicitly adding
   
   ``` json
   "context": {
       "skipEmptyBuckets": "false"
   }
   ```
   
   does not change anything.
   
   And, strangely enough, setting `"skipEmptyBuckets": "true"` returns... nothing (i.e. an empty array). Maybe that's because it discards "our" zeroes as well? I'll need to find (or produce) a gap within data that does not evaluate to 0 and test this further.
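Until this is resolved, one possible client-side workaround is to fill the gaps after the response comes back, walking the expected bucket grid and inserting zero rows wherever Druid returned nothing. A minimal sketch (`zero_fill` is a hypothetical helper, not part of Druid; it assumes the same PT50000S period granularity as the query above):

```python
# Client-side zero-fill workaround (a sketch; `zero_fill` is a
# hypothetical helper, not part of Druid). It walks the expected
# 50000-second bucket grid between the first and last bucket and inserts
# zero rows where the response has no entry for that timestamp.
from datetime import datetime, timedelta, timezone

PERIOD = timedelta(seconds=50000)  # must match the query's granularity

def zero_fill(rows, first, last):
    """rows: a Druid timeseries response, i.e. a list of
    {"timestamp": "...Z", "result": {...}} dicts sorted by timestamp."""
    by_ts = {r["timestamp"]: r for r in rows}
    zero = {"result": 0.0, "count": 0, "sum": 0.0}
    filled, cur = [], first
    while cur <= last:
        ts = cur.strftime("%Y-%m-%dT%H:%M:%S.000Z")
        filled.append(by_ts.get(ts, {"timestamp": ts, "result": dict(zero)}))
        cur += PERIOD
    return filled

# First and last bucket starts, taken from the response above.
first = datetime(2015, 11, 4, 1, 20, 0, tzinfo=timezone.utc)
last = datetime(2015, 11, 11, 13, 53, 20, tzinfo=timezone.utc)
```

With the interval above this yields one row per 50000-second bucket, with zeroes wherever Druid skipped a bucket.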
   
   BTW: I've also seen this reported recently on the mailing list (https://groups.google.com/forum/#!topic/druid-user/3SfgJ7t001s), but no conclusion was reached there.
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to