> On Feb. 28, 2013, 7:16 p.m., Vinod Kone wrote:
> > src/master/hierarchical_allocator_process.hpp, lines 96-100
> > <https://reviews.apache.org/r/6665/diff/7/?file=263291#file263291line96>
> >
> >     It might be just me, but I think swapping these variable names makes 
> > more sense?
> >     
> >     I understand you called it 'allocatable' because of how it's called in 
> > the allocator. At the least, I think 'available' should be 'sufficient' to 
> > be consistent with the verb we use in the allocator.

I would rather not call it 'sufficient', since the flag captures both whether 
the slave has sufficient resources and whether the slave is whitelisted (so 
that we don't have to check the whitelist flag for every slave on every 
allocation). I agree that it makes more sense to swap the names, though, and 
I've also updated the comment to make it clearer what's going on.
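
To make that concrete, here's a rough sketch of what the flag captures 
(illustrative names and thresholds only, not the actual Mesos types or 
constants):

    // Minimal sketch: the cached flag combines the whitelist check with
    // the "sufficient resources" check, so the allocation loop doesn't
    // have to re-evaluate either one for every slave on every allocation.
    struct Resources { double cpus; double mem; };

    const double MIN_CPUS = 0.1;   // Placeholder thresholds.
    const double MIN_MEM  = 32.0;  // (in MB)

    struct Slave
    {
      bool whitelisted;      // Is the slave on the allocation whitelist?
      Resources available;   // Resources not currently allocated.
      bool allocatable;      // Cached: whitelisted && sufficient resources.
    };

    // Recompute the cached flag whenever the whitelist changes or the
    // slave's available resources change (allocate/recover).
    void updateAllocatable(Slave& slave)
    {
      bool sufficient =
        slave.available.cpus >= MIN_CPUS || slave.available.mem >= MIN_MEM;
      slave.allocatable = slave.whitelisted && sufficient;
    }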


- Thomas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6665/#review17197
-----------------------------------------------------------


On March 1, 2013, 7:48 p.m., Thomas Marshall wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6665/
> -----------------------------------------------------------
> 
> (Updated March 1, 2013, 7:48 p.m.)
> 
> 
> Review request for mesos and Benjamin Hindman.
> 
> 
> Description
> -------
> 
> Currently, every time we do an allocation we have to traverse the entire list 
> of active slaves and check each one to see if it's whitelisted and if it has 
> resources to allocate. This patch keeps a set of all slaves that meet those 
> requirements, and updates it when slaves are added/removed and when resources 
> are allocated/recovered.
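> 
> Roughly, the idea is something like the following (illustrative names only, 
> not the real HierarchicalAllocatorProcess code):
> 
>     #include <set>
>     #include <string>
> 
>     typedef std::string SlaveID;  // Stand-in for the real SlaveID type.
> 
>     // Slaves that are whitelisted AND have enough free resources to be
>     // worth offering. Maintained incrementally so allocate() never has
>     // to scan every active slave.
>     std::set<SlaveID> allocatableSlaves;
> 
>     void slaveAdded(const SlaveID& id, bool eligible)
>     {
>       if (eligible) allocatableSlaves.insert(id);
>     }
> 
>     void slaveRemoved(const SlaveID& id)
>     {
>       allocatableSlaves.erase(id);
>     }
> 
>     // Called when resources are allocated from / recovered to a slave;
>     // 'eligible' reflects whether the slave still meets both requirements.
>     void resourcesChanged(const SlaveID& id, bool eligible)
>     {
>       if (eligible) allocatableSlaves.insert(id);
>       else          allocatableSlaves.erase(id);
>     }
> 
>     void allocate()
>     {
>       // Iterate only over allocatableSlaves here.
>     }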
> 
> Timing comparisons, using test_framework on a local cluster on my laptop 
> (each number is an average over 10 tests):
> 
> 100 slaves, 100 tasks
> without patch: 5.82s
> with patch: 5.773s
> improvement of about 1%
> 
> 1000 slaves, 100 tasks
> without patch: 8.261s
> with patch: 8.07s
> improvement of about 2%
> 
> Since this was a scalability issue, you'd presumably see a bigger improvement 
> in larger clusters, but it's difficult to simulate more than 1000 slaves or 
> so locally. If more convincing numbers are needed I can do some testing on 
> EC2 (if nothing else, I'm hoping we'll have a Jenkins build with automated 
> performance tests set up later this semester, so I can test this in the 
> process of setting that up), but at the very least it seems clear that the 
> patch improves the runtime complexity of the allocation algorithm and doesn't 
> slow allocations down even in small clusters.
> 
> 
> Diffs
> -----
> 
>   src/local/local.hpp 2633d25 
>   src/local/local.cpp 3402029 
>   src/master/allocator.hpp b68b67d 
>   src/master/drf_sorter.hpp 79566cc 
>   src/master/drf_sorter.cpp 33a8ec8 
>   src/master/hierarchical_allocator_process.hpp 33e059c 
>   src/master/main.cpp ca0abec 
>   src/master/master.hpp c9b4b3f 
>   src/master/master.cpp 814a6e1 
>   src/master/sorter.hpp 73db6b1 
>   src/tests/allocator_tests.cpp 44d72ad 
>   src/tests/allocator_zookeeper_tests.cpp 5f83dd7 
>   src/tests/fault_tolerance_tests.cpp 44eef03 
>   src/tests/gc_tests.cpp 411bcc0 
>   src/tests/master_detector_tests.cpp 5d09bab 
>   src/tests/master_tests.cpp e140f40 
>   src/tests/resource_offers_tests.cpp 5981191 
>   src/tests/sorter_tests.cpp 61e6038 
>   src/tests/utils.hpp 4600c2f 
> 
> Diff: https://reviews.apache.org/r/6665/diff/
> 
> 
> Testing
> -------
> 
> make check
> 
> 
> Thanks,
> 
> Thomas Marshall
> 
>
