leventov commented on a change in pull request #5913: Move Caching Cluster
Client to java streams and allow parallel intermediate merges
URL: https://github.com/apache/incubator-druid/pull/5913#discussion_r207754740
##
File path:
leventov commented on a change in pull request #5913: Move Caching Cluster
Client to java streams and allow parallel intermediate merges
URL: https://github.com/apache/incubator-druid/pull/5913#discussion_r207755440
##
File path:
leventov commented on a change in pull request #5913: Move Caching Cluster
Client to java streams and allow parallel intermediate merges
URL: https://github.com/apache/incubator-druid/pull/5913#discussion_r207755064
##
File path:
leventov commented on a change in pull request #5913: Move Caching Cluster
Client to java streams and allow parallel intermediate merges
URL: https://github.com/apache/incubator-druid/pull/5913#discussion_r207755399
##
File path:
leventov opened a new pull request #6112: Prohibit LinkedList
URL: https://github.com/apache/incubator-druid/pull/6112
All use cases for LinkedList are covered by ArrayList and ArrayDeque, which
are better options.
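As a rough illustration of the PR's point (illustration only, not code from #6112), the two common LinkedList use cases map directly onto the suggested replacements:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LinkedListAlternatives
{
  public static void main(String[] args)
  {
    // List usage (append, random access): ArrayList is cache-friendly
    // and avoids LinkedList's per-element node allocation.
    List<String> list = new ArrayList<>();
    list.add("a");
    list.add("b");

    // Queue/stack usage (addFirst/addLast/poll): ArrayDeque covers it
    // with an array-backed ring buffer instead of linked nodes.
    Deque<String> deque = new ArrayDeque<>();
    deque.addLast("first");
    deque.addLast("second");

    System.out.println(list.get(0));       // prints "a"
    System.out.println(deque.pollFirst()); // prints "first"
  }
}
```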
clintropolis commented on a change in pull request #6107: Order rows during
incremental index persist when rollup is disabled.
URL: https://github.com/apache/incubator-druid/pull/6107#discussion_r207749482
##
File path:
pdeva commented on issue #6111: pulldeps tools pulls hadoop client for no reason
URL:
https://github.com/apache/incubator-druid/issues/6111#issuecomment-410550447
but why download the hadoop client in the first place? i never asked for it.
the bug is regarding default behavior.
the flag
gianm commented on issue #6110: middle manager caching docs
URL: https://github.com/apache/incubator-druid/pull/6110#issuecomment-410550351
They use `druid.realtime.cache.*`; could you please update the doc to
reflect that?
gianm commented on issue #6111: pulldeps tools pulls hadoop client for no reason
URL:
https://github.com/apache/incubator-druid/issues/6111#issuecomment-410550299
I think you need to add `--no-default-hadoop` to skip downloading Hadoop. I
believe the help and/or docs for this command
gianm edited a comment on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410550232
> I have one example query for "wikipedia top pages" in the new tutorials
(the idea was more to show the tools/workflow vs being a
gianm commented on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410550232
> I have one example query for "wikipedia top pages" in the new tutorials
(the idea was more to show the tools/workflow vs being a tutorial
gianm commented on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410550178
(3) sounds like it's really the same question as (2), and TIME_FLOOR is the
answer. (4) adheres to standard SQL: result rows are only
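TIME_FLOOR buckets timestamps by an ISO8601 period; for `PT1H` its behavior corresponds roughly to truncating each timestamp to the start of its hour, which is what groups rows into a time series. A plain-Java sketch of that truncation (illustration only, not Druid code):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TimeFloorDemo
{
  public static void main(String[] args)
  {
    // Every timestamp inside an hour maps to that hour's start,
    // so rows sharing a bucket can be grouped together.
    Instant t = Instant.parse("2000-01-01T12:34:56Z");
    Instant floored = t.truncatedTo(ChronoUnit.HOURS);
    System.out.println(floored); // 2000-01-01T12:00:00Z
  }
}
```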
pdeva commented on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410549461
@gianm clarifying:
3. show at least one query that selects data that is not a singular value but
output as a time series. in all existing
jon-wei commented on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410548590
> maybe he can chime in with whether he had planned on adding SQL examples.
I have one example query for "wikipedia top pages" in
pdeva opened a new issue #6111: pulldeps tools pulls hadoop client for no reason
URL: https://github.com/apache/incubator-druid/issues/6111
to repro try this command:
```
java -cp "lib/*" -Ddruid.extensions.directory="extensions" io.druid.cli.Main tools pull-deps -c
```
pdeva opened a new pull request #6110: middle manager caching docs
URL: https://github.com/apache/incubator-druid/pull/6110
i am unclear if the property names are `druid.middlemanager.cache.xxx` or
`druid.indexer.cache.xxx`...
gianm edited a comment on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410543659
Hi @pdeva, the answers to your questions are,
1. A time filter looks like `__time >= TIMESTAMP '2000-01-01 00:00:00' AND
gianm commented on issue #6108: more examples needed for druid sql
URL:
https://github.com/apache/incubator-druid/issues/6108#issuecomment-410543659
Hi @pdeva, the answers to your questions are,
1. A time filter looks like `__time >= TIMESTAMP '2000-01-01 00:00:00' AND
__time <
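The filter shape in the snippet (`>=` on the lower bound, `<` on the upper) is a half-open interval: the start is included and the end excluded, so adjacent intervals never double-count a row. A plain-Java check of that membership test (illustration only, not Druid code):

```java
import java.time.Instant;

public class TimeFilterDemo
{
  public static void main(String[] args)
  {
    // Half-open interval [start, end): include start, exclude end.
    Instant start = Instant.parse("2000-01-01T00:00:00Z");
    Instant end = Instant.parse("2000-01-02T00:00:00Z");
    Instant rowTime = Instant.parse("2000-01-01T23:59:59Z");

    // rowTime >= start AND rowTime < end
    boolean matches = !rowTime.isBefore(start) && rowTime.isBefore(end);
    System.out.println(matches); // true
  }
}
```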
gianm commented on a change in pull request #6109: update redis-cache
documentation
URL: https://github.com/apache/incubator-druid/pull/6109#discussion_r207746697
##
File path: docs/content/development/extensions-contrib/redis-cache.md
##
@@ -8,9 +8,14 @@ Druid Redis
pdeva opened a new pull request #6109: update redis-cache documentation
URL: https://github.com/apache/incubator-druid/pull/6109
added clarifying info on setup and enablement
gianm commented on issue #6102: "Cannot have same delimiter and list delimiter
of \u0001"
URL:
https://github.com/apache/incubator-druid/issues/6102#issuecomment-410539099
Btw, I closed this since I think I answered your question, but feel free to
post again if you are wondering
gianm commented on issue #6102: "Cannot have same delimiter and list delimiter
of \u0001"
URL:
https://github.com/apache/incubator-druid/issues/6102#issuecomment-410539081
Hi @aoeiuvb,
This behavior is intentional and is due to the fact that we have two kinds
of delimiters: the
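A minimal sketch of why the two delimiter kinds must differ (names and data hypothetical, not Druid parsing code): the field delimiter splits a row into columns, and the list delimiter then splits one column into multiple values; if both were `\u0001`, the two boundary kinds would be indistinguishable.

```java
public class DelimiterDemo
{
  public static void main(String[] args)
  {
    // One row: two fields separated by the field delimiter '\t'; the
    // first field is a multi-value dimension whose values are separated
    // by the list delimiter '\u0001'.
    String row = "tagA\u0001tagB\tpage1";

    String[] fields = row.split("\t");
    String[] tags = fields[0].split("\u0001");

    System.out.println(fields.length); // 2 fields
    System.out.println(tags.length);   // 2 values in the first field

    // If the field delimiter were also '\u0001', splitting would yield
    // three identical-looking tokens, with no way to tell field
    // boundaries apart from list-value boundaries.
    System.out.println("tagA\u0001tagB\u0001page1".split("\u0001").length); // 3
  }
}
```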
gianm closed issue #6102: "Cannot have same delimiter and list delimiter of
\u0001"
URL: https://github.com/apache/incubator-druid/issues/6102
gianm commented on issue #6104: Webhdfs support for orc-extension
URL:
https://github.com/apache/incubator-druid/issues/6104#issuecomment-410538984
Hi @a2l007, I've used this extension with regular HDFS, but not with
WebHDFS. It looks like this method is called during `getSplits` which is
gianm commented on issue #6105: Allow sorting segments on some dims before time
URL:
https://github.com/apache/incubator-druid/issues/6105#issuecomment-410538507
I changed the title a bit, since we do allow specifying the sort order for
every column except __time (it's the order from
gianm commented on a change in pull request #6095: Add support
'keepSegmentGranularity' for compactionTask
URL: https://github.com/apache/incubator-druid/pull/6095#discussion_r207743527
##
File path:
indexing-service/src/main/java/io/druid/indexing/common/task/CompactionTask.java
gianm commented on a change in pull request #6095: Add support
'keepSegmentGranularity' for compactionTask
URL: https://github.com/apache/incubator-druid/pull/6095#discussion_r207743646
##
File path:
gianm commented on a change in pull request #6095: Add support
'keepSegmentGranularity' for compactionTask
URL: https://github.com/apache/incubator-druid/pull/6095#discussion_r207743558
##
File path:
indexing-service/src/main/java/io/druid/indexing/common/task/CompactionTask.java
gianm commented on a change in pull request #6107: Order rows during
incremental index persist when rollup is disabled.
URL: https://github.com/apache/incubator-druid/pull/6107#discussion_r207743167
##
File path:
gianm commented on a change in pull request #6107: Order rows during
incremental index persist when rollup is disabled.
URL: https://github.com/apache/incubator-druid/pull/6107#discussion_r207743101
##
File path:
asdf2014 commented on issue #6090: Fix missing exception handling as part of
`io.druid.java.util.http.client.netty.HttpClientPipelineFactory`
URL: https://github.com/apache/incubator-druid/pull/6090#issuecomment-410530313
Hi, @jihoonson. Thanks for your comments.
> Also, please
asdf2014 commented on a change in pull request #6090: Fix missing exception
handling as part of
`io.druid.java.util.http.client.netty.HttpClientPipelineFactory`
URL: https://github.com/apache/incubator-druid/pull/6090#discussion_r207740657
##
File path:
josephglanville commented on issue #5492: Native parallel batch indexing
without shuffle
URL: https://github.com/apache/incubator-druid/pull/5492#issuecomment-410498288
@jihoonson if I understand the semantics correctly, then if you want to create
segments with perfect rollup you can return