On Tue, 29 Jan 2019 at 00:28, Gian Merlino wrote:
> It's a totally different situation if nobody else has reviewed a patch yet.
> In that case a reviewer reviewing things with longer cycles isn't blocking
> anything.
>
There is a "Development Blocker" tag for such situations. What do you think
On Tue, 29 Jan 2019 at 01:30, Fangjin Yang wrote:
> I disagree with Roman's suggestions. If a PR has enough votes, we should
> trust the committers approving the PR and move forward.
>
There is a specific committer who merges a PR. If this happens while it's
not made clear that somebody who
Anyone planning to be at Fosdem this year? If enough of us are attending, a
quick impromptu Druid gathering might be fun.
Hi, I have created an Issue together with @jon-wei, if anyone wants to
chime in:
https://github.com/apache/incubator-druid/issues/6949 (Create a proposal
template #6949)
On Tue, Jan 15, 2019 at 12:07 PM Jihoon Son wrote:
> Good point.
> If some authors raise PRs without noticing the need for a
Hi all,
An issue has been opened by a community member suggesting that we create a
template for proposals:
https://github.com/apache/incubator-druid/issues/6949
Having a template sounds convenient, and based on the discussion in this
thread, I'm suggesting we adopt something based on the Kafka Improvement
Proposal (KIP) template.
Thanks Eyal and Jon for starting the discussion about making a template!
The KIP template looks good, but I would like to add one more section.
The current template is:
- Motivation
- Public Interfaces
- Proposed Changes
- Compatibility, Deprecation, and Migration Plan
- Test Plan
- Rejected Alternatives
I think it'd also be nice to tweak a couple of parts of the KIP template
(Motivation; Public Interfaces; Proposed Changes; Compatibility,
Deprecation, and Migration Plan; Test Plan; Rejected Alternatives). A
couple of people have suggested adding a "Rationale" section; how about
adding that and removing
We noticed that it takes a long time for the historicals to download
segments from deep storage (in our case S3). Looking closer at the code in
ZKCoordinator, I noticed that the segment download happens in a
single-threaded fashion: it runs on a single-threaded ExecutorService.
I believe today, if you use the (experimental) HTTP-based load queues, they
will parallelize segment downloads. Adding similar functionality for the
ZK-based load queues would definitely be useful though, since at this time
nobody seems to be actively driving a migration to HTTP-based load queues.
I *think* the HTTP coordination already enables this
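For illustration only (this is not Druid's actual code, and the class and
method names below are made up): a minimal Java sketch of the change being
discussed, swapping a single-threaded executor for a fixed-size thread pool
so that several segment fetches from deep storage can overlap.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSegmentLoadSketch {
    // Hypothetical stand-in for fetching one segment from deep storage (e.g. S3).
    public static String download(String segmentId) {
        return segmentId + ":loaded";
    }

    // Submit all downloads to a fixed-size pool instead of
    // Executors.newSingleThreadExecutor(), so fetches run concurrently.
    public static List<String> loadAll(List<String> segmentIds, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String id : segmentIds) {
                futures.add(pool.submit(() -> download(id)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // preserves submission order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAll(List.of("seg-1", "seg-2", "seg-3"), 4));
    }
}
```

With real S3 downloads the pool size would need tuning against network and
disk bandwidth; a pool that is too large can saturate the NIC and slow every
download down.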
On Wed, Jan 30, 2019 at 4:20 PM Samarth Jain wrote:
> We noticed that it takes a long time for the historicals to download
> segments from deep storage (in our case S3). Looking closer at the code in
> ZKCoordinator, I noticed that the