Github user parthchandra commented on the issue:
https://github.com/apache/drill/pull/518
+1. Looks like there has been enough review, and there is good enough reason
to merge this in.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
We have a 12-node cluster and a 220-node cluster, but they do not talk
to each other, so Padma's analysis does not apply; thanks for your
comments, though. Our goal has been to run Drill on the 220-node cluster
once it has proved itself on the small cluster.
planner.width.max_per_node was
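(For context, `planner.width.max_per_node` mentioned in the fragment above is a Drill system/session option that caps how many fragments of a query run on a single node. A minimal sketch of inspecting and adjusting it from sqlline; the value 8 is purely illustrative, not a recommendation:)

```sql
-- Inspect the current setting via Drill's sys.options table:
SELECT * FROM sys.options WHERE name = 'planner.width.max_per_node';

-- Cap fragments per node for this session only (8 is illustrative):
ALTER SESSION SET `planner.width.max_per_node` = 8;
```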
Github user parthchandra commented on the issue:
https://github.com/apache/drill/pull/593
+1
---
Github user Ben-Zvi commented on the issue:
https://github.com/apache/drill/pull/560
+1. We could possibly extend the same use of the ConstantValueHolderCache
to visitor methods of other types, though this would make the code more
cluttered.
---
---
Seems like you have 215 nodes, but the data for your query is on only 12
of them.
Drill tries to distribute the scan fragments across the cluster more uniformly
(trying to utilize all CPU resources).
That is why you see a lot of remote reads going on, and why increasing the
affinity factor helps.
I am surprised that it's not the default.
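(For context, the affinity factor discussed here is Drill's `planner.affinity_factor` system option. A minimal sketch of raising it from sqlline; the value 4.0 is illustrative, not a recommendation:)

```sql
-- Inspect the current value via Drill's sys.options table:
SELECT * FROM sys.options WHERE name = 'planner.affinity_factor';

-- Raise it so the parallelizer more strongly prefers nodes holding the data:
ALTER SYSTEM SET `planner.affinity_factor` = 4.0;
```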
On Fri, Oct 14, 2016 at 11:18 AM, Sudheesh Katkam
wrote:
> Hi Francois,
>
> Thank you for posting your findings! Glad to see a 10X improvement.
>
> By increasing affinity factor, looks like Drill’s parallelizer is forced
> to assign fragments on nodes with data i.e. with high favorability for
> data locality.
GitHub user laurentgo opened a pull request:
https://github.com/apache/drill/pull/618
DRILL-4945: Report INTERVAL exact type as column type name
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/laurentgo/drill laurent/DRILL-4945
Hi Francois,
Thank you for posting your findings! Glad to see a 10X improvement.
By increasing the affinity factor, it looks like Drill’s parallelizer is
forced to assign fragments on nodes with the data, i.e. with high
favorability for data locality.
Regarding the random disconnection, I agree with your
Github user parthchandra commented on the issue:
https://github.com/apache/drill/pull/605
+1
---
GitHub user joeswingle opened a pull request:
https://github.com/apache/drill/pull/617
Drill 4934
Pull Request for DRILL-4934
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/Hobsons/drill DRILL-4934
Alternatively you can review
GitHub user lvxin1986 opened a pull request:
https://github.com/apache/drill/pull/616
fix the default drill directory in zookeeper
Fix the bug: correct the Drill directory in ZooKeeper from "Drill" to
"drill", because the value is case-sensitive.
You can merge this pull request into a
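(For context on the fix above: the ZooKeeper root directory Drill uses is the `drill.exec.zk.root` setting in drill-override.conf, and ZooKeeper znode paths are case-sensitive, so "Drill" and "drill" are different directories. A minimal sketch; the cluster-id and connect string are illustrative, not from the PR:)

```conf
// drill-override.conf (HOCON)
drill.exec: {
  cluster-id: "drillbits1",        // illustrative cluster name
  zk: {
    connect: "zk1:2181,zk2:2181",  // illustrative ZooKeeper quorum
    root: "drill"                  // case-sensitive: "drill", not "Drill"
  }
}
```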