Skip Cave-3 wrote:
> 
> On 4/16/2011 7:25 AM, Viktor Cerovski wrote:
>> Here is one partitioning:
>>
>>     B0=:484 504 284 291 223 80 320 295
>>     B1=:696 107 371 175 144 650 337
>>     B2=:111 771 443 954 201
>>     B3=:937 777 767
>>     B4=:761 131 733 855
>>     B5=:772 903 804
>>     B6=:729 754 758 143 43 53
>>     B7=:729 322 832 154 443
>>     B8=:450 215 701 195 918
>>     B9=:788 698 858 137
>>
>> with the bucket sums:
>>
>>     >+/&.>B0;B1;B2;B3;B4;B5;B6;B7;B8;B9
>> 2481 2480 2480 2481 2480 2479 2480 2480 2479 2481
> 
> Nice, Viktor! Are you claiming that this is actually the 
> smallest-difference solution to Marshall's 50-mass 10-bucket problem, or 
> that it is just one of the solutions that is fairly close to the minimum 
> solution? 
> 
Thanks. The latter--I posted the best one I found, and I don't know
whether there is a better solution.


> Did you examine all other solutions that come close, and 
> selected this one as the minimal-difference solution?
> Is your approach all programmatic, or are there manual steps involved?  
> How large of a problem will it scale to? Can your approach solve the 
> 200-mass 10-bucket problem I posed earlier?
> 
Skip, that's a lot of questions. My approach is nothing special:
I generate a bunch of random partitions and prune them using a fast
criterion.  The best of them are further optimized, again using simple
but much slower heuristics.  Each of the two phases takes about 50%
of the total time.  I could put it in a program now, but there are still a few
parameters involved that I don't know how to relate to #m and B.
Plus it's not very fast.
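In outline, the two phases look something like this (a Python sketch of
the idea only; the actual pruning criterion, the improvement heuristics,
and the parameter values in my J code are different, and the numbers
below are made up):

```python
import random

def bucket_sums(part, masses, nb):
    # Sum of the masses assigned to each of the nb buckets.
    s = [0] * nb
    for b, m in zip(part, masses):
        s[b] += m
    return s

def spread(part, masses, nb):
    # Fast pruning criterion: max bucket sum minus min bucket sum.
    s = bucket_sums(part, masses, nb)
    return max(s) - min(s)

def balance(masses, nb, n_random=1000, n_keep=10, seed=0):
    rng = random.Random(seed)
    n = len(masses)
    # Phase 1: generate random partitions, keep the most even ones.
    pool = [[rng.randrange(nb) for _ in range(n)] for _ in range(n_random)]
    pool.sort(key=lambda p: spread(p, masses, nb))
    best = pool[0]
    # Phase 2: slower local improvement on the survivors; move one
    # mass from the heaviest to the lightest bucket while that helps.
    for start in pool[:n_keep]:
        p = start[:]
        while True:
            s = bucket_sums(p, masses, nb)
            hi, lo = s.index(max(s)), s.index(min(s))
            cur = max(s) - min(s)
            moved = False
            for i in range(n):
                if p[i] != hi:
                    continue
                p[i] = lo                      # tentative move
                if spread(p, masses, nb) < cur:
                    moved = True
                    break
                p[i] = hi                      # no gain, revert
            if not moved:                      # local minimum reached
                break
        if spread(p, masses, nb) < spread(best, masses, nb):
            best = p
    return best
```

Each accepted move strictly decreases the spread, so the improvement
loop always terminates; how good the result is, and how the pool sizes
should depend on #m and B, is exactly the part I don't know yet.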


> I'm still looking for a scalable approach to the general #m-mass 
> nb-bucket problem. Do you think that there is a general solution that 
> would scale just linearly with #m?
> 
I don't know.  Since my simple approach somehow works, 
in the sense that it does find decent approximate solutions, my main 
conclusion so far is that there must be a *lot* of decent solutions,
and very likely a lot of best solutions.  The problem is not just
to make it scale, but also to find particularly complicated configurations
for a given M,B (i.e. the distribution of masses does play a role), and to test
any particular algorithm on them.

What does a decent approximate solution mean? Say we are designing a 
thread scheduler where B is the number of processors and the masses
are the times allocated for each thread's execution; then the above
solution is decent since it has an idle time of 9%2481 ~ 0.4%. Plus the
time to find the solution :-)  In this sense, quickly finding 
such approximate solutions is also interesting.
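The idle-time figure comes straight from the bucket sums above; a quick
check (in Python rather than J, for variety):

```python
# Bucket sums of the partition above.
sums = [2481, 2480, 2480, 2481, 2480, 2479, 2480, 2480, 2479, 2481]
makespan = max(sums)                    # finishing time of the busiest processor
idle = sum(makespan - s for s in sums)  # total idle time across all processors
frac = idle / makespan                  # idle time relative to the makespan
print(idle, frac)
```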

-- 
View this message in context: 
http://old.nabble.com/Weight-distribution-problem-tp31365679s24193p31414926.html
Sent from the J Programming mailing list archive at Nabble.com.

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
