On 9/13/05, Eric Walker <[EMAIL PROTECTED]> wrote:
> On Tuesday 13 September 2005 09:55 am, Jeff 'japhy' Pinyan wrote:
> > On Sep 13, Bob Showalter said:
> > > I need help with an algorithm. I'm writing a program that sends a
> > > repeated pattern of requests to a service. Each request has a "weight"
> > > that controls the relative frequency with which I need to send that
> > > particular request.
> > >
> > >   foo => 1
> > >   bar => 1
> > >   qux => 6
> > >
> > >
> > >   foo
> > >   qux qux qux
> > >   bar
> > >   qux qux qux
> > >
> > > Now I have only intervals of 0 or 1 between successive "qux", instead of
> > > an interval of 2 as in the previous case.
> > >
> > > As an extreme example, if I had a dozen requests with a weight of 1 and a
> > > final request with a weight of 12, I would "starve" the highly-weighted
> > > request until the first 12 had been sent.
> >
> > The extreme cases are the easy ones, though.  What I'd like to see are
> > cases like:
> >
> >    foo => 1
> >    bar => 2
> >    qux => 3
> >    baz => 4
> >    zip => 5
> >
> > Once I know what the algorithm's outcome should be for something like
> > that, I think I can develop it.
> >
> > --
> > Jeff "japhy" Pinyan        %  How can we ever be the sold short or
> > RPI Acacia Brother #734    %  the cheated, we who for every service
> > http://www.perlmonks.org/  %  have long ago been overpaid?
> > http://princeton.pm.org/   %    -- Meister Eckhart
> 
> yes, how would you want to split zip, which is odd?  Also, with your extreme
> case you said you would do all the singles first and the 12-weighted one
> last. That goes against your primary objective, which is to distribute the
> service requests evenly. In that case I would expect the one with a weight
> of 12 to be distributed among the single-weighted requests:
> if foo<n>=1 and bar = 12 then
> foo<n>
> bar
> foo<n>
> bar
> etc.......
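
The interleaving Eric describes could be sketched like this -- a minimal
illustration of my own (none of these names come from the original post),
assuming a dozen weight-1 requests and one request of weight 12:

```perl
#!/usr/bin/perl
# Alternate the heavily weighted request between each of the
# single-weighted ones, as suggested above.

use warnings;
use strict;

my @singles = map { "foo$_" } 1 .. 12;   # twelve requests of weight 1
my $heavy   = 'bar';                     # one request of weight 12

my @schedule;
push @schedule, $_, $heavy for @singles; # foo1 bar foo2 bar ...

print "@schedule\n";
```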

This sort of thing is unfortunately application-specific, and depends
a lot on what your goal is. If you do a web search for weighted round
robin or load balancing, you'll get a lot of food for thought. If you
really need to do careful scheduling--say, for load balancing--you'll
need to take into account not just requests but open connections, and
the (unfortunately C, I think) code in the Wikipedia weighted round
robin entry is reasonable.
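
One variant worth knowing about is "smooth" weighted round robin, the
interleaving scheme nginx uses for upstream balancing: each pass you add
every key's weight to a running counter, pick the largest counter, and
subtract the total weight from the winner. Here's a rough Perl sketch of
my own (not the Wikipedia code; the names are mine):

```perl
#!/usr/bin/perl
# Smooth weighted round robin: spreads the heavy request out
# instead of starving it or bunching it.

use warnings;
use strict;

my %weights = ( foo => 1, bar => 1, qux => 6 );

my %current = map { $_ => 0 } keys %weights;
my $total   = 0;
$total += $_ for values %weights;

sub next_request {
    my $pick;
    for my $k (sort keys %weights) {      # sort only for determinism
        $current{$k} += $weights{$k};
        $pick = $k if !defined $pick || $current{$k} > $current{$pick};
    }
    $current{$pick} -= $total;            # "use up" the winner's share
    return $pick;
}

print join(' ', map { next_request() } 1 .. 8), "\n";
# qux qux bar qux qux foo qux qux
```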

On the other hand, if all you need is to ensure an *average*
response time over a significant number of iterations, and your goal
is statistical distribution rather than real-world load management,
randomizing will serve you just as well and be much easier to implement:

#!/usr/bin/perl

use warnings;
use strict;
use List::Util qw(shuffle);

my %weights = ( 'foo' => 12,
                'bar' => 7,
                'baz' => 2,
                'quux' => 6,
                'zip' => 1);

# Build the request list once: each key appears weight-many times.
my @a;
while (my ($k, $v) = each %weights) {
    push(@a, $k) for 1 .. $v;
}

while (1) {
    # Reshuffle on each pass so every cycle gets a fresh order.
    foreach (shuffle(@a)) {
        # do something
    }
}

HTH,

-- jay
--------------------------------------------------
This email and attachment(s): [  ] blogable; [ x ] ask first; [  ]
private and confidential

daggerquill [at] gmail [dot] com
http://www.tuaw.com  http://www.dpguru.com  http://www.engatiki.org

values of β will give rise to dom!
