I'm putting my answers inline.

On Thu, Jan 19, 2012 at 6:22 PM, Avery Ching <ach...@apache.org> wrote:

> Not sure if Semih is on the giraph-dev list.  Forwarding the question to
> him.
> Avery
> P.S.  Interesting idea, if I understand correctly: attaching the compute
> functionality to an aggregator that the master runs between supersteps?
> On 1/19/12 1:20 PM, Claudio Martella wrote:
>> Hi Semih,
>> interesting email. I'm probably not getting your technique right, but
>> why wouldn't it be possible to run the master.compute() logic inside
>> an aggregator?
>> Not only *should* it be possible, but since aggregators are computed both
>> on the workers AND on the master, you should get a faster computation.
>> For instance, you could aggregate the number of cut edges on each
>> worker and aggregate the total on the master. The same could happen
>> for choosing the centroids.
This is exactly how you would count the number of edges; I'm not
suggesting anything else in my example. In fact, if you look at the GPS
code for k-means from the link I sent, that's exactly how it's done.
Master.compute() is meant for something else. One thing it's meant for is
expressing the sequential parts of an algorithm. Consider checking the
condition inside the while() in the pseudocode of the simple k-means:

 while ((numEdgesCrossingClusters > numEdgesThreshold) &&
        iterationNo < maxIterations) { ... }

In order to do this simple condition check, the system somehow has to
(a) understand that it's time to do that check, (b) have access to the
aggregator that was used to count "numEdgesCrossingClusters", and (c) stop
the computation if the condition fails (to keep my email shorter, I'm
going to skip this last part). An aggregator is only a global object:
vertices can update it, and the programmer defines only how aggregation
on that object should be done. By itself, it can't know when to do that
condition check. So in order to run this simple line of code, a lot of
global information needs to be available. As Avery pointed out in his
response, no matter where the programmer wants to express this condition
check, it has to have access to other aggregators. It also has to have
access to non-aggregator global data: in this example, iterationNo is
global data, but it should not be aggregated because it's not useful
information for the vertices.
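
To make this concrete, here is roughly what I have in mind for the
condition check. This is a hypothetical sketch against the proposed API,
not existing Giraph code: the Master base class and the getAggregator()
and haltComputation() methods are invented for illustration.

    // Hypothetical master-side class; all names below are illustrative.
    public class KMeansMaster extends Master {
      private static final long NUM_EDGES_THRESHOLD = 1000;
      private static final int MAX_ITERATIONS = 20;
      // Global, non-aggregated state: the vertices never need this value.
      private int iterationNo = 0;

      @Override
      public void compute() {
        // Read the aggregator the workers updated in the last superstep.
        long numEdgesCrossingClusters =
            ((LongSumAggregator) getAggregator("num-edges-crossing"))
                .getAggregatedValue().get();
        if (numEdgesCrossingClusters <= NUM_EDGES_THRESHOLD
            || iterationNo >= MAX_ITERATIONS) {
          haltComputation();  // the while() condition failed: stop the job
          return;
        }
        iterationNo++;
        // ... otherwise set up the next k-means iteration ...
      }
    }

Note how iterationNo lives as a plain field on the master; it is exactly
the kind of global data that should not go through an aggregator.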

As for picking the k random cluster centers, this cannot be done in an
aggregator either. I agree that after the cluster centers are picked, they
have to be put inside an aggregator so that they are available to the
workers. But an aggregator is the storage location for the centers, not
the place where they are picked. The picking of these vertices needs to
happen in exactly one place, not per worker or per vertex. As I tried to
explain in the previous email, the picking could be done in the
preSuperstep() method of a special worker, but this would waste an entire
superstep. In general, master.compute() would be the place where any kind
of global, non-vertex-centric computation is expressed, and it would be
where the programmer stores global information that is not used by the
vertices and hence should not be aggregated. I think the exact purpose
might be clearer in the actual GPS example for k-means than in my
explanation.
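
For example, the picking itself might look like the following on the
master. Again a sketch: setAggregatedValue(), the "cluster-centers"
aggregator name, and the LongArrayWritable type are all invented for
illustration.

    // Runs inside master.compute(), i.e. exactly once per iteration.
    private void pickKClusterCenters(int k) {
      Random random = new Random();
      long[] centerIds = new long[k];
      for (int i = 0; i < k; i++) {
        // Assumes vertex ids are dense in [0, numVertices); a real
        // implementation would sample distinct existing ids.
        centerIds[i] = (long) (random.nextDouble() * getNumVertices());
      }
      // Store the picked centers in an aggregator so that every worker
      // sees them at the start of the next superstep.
      setAggregatedValue("cluster-centers", new LongArrayWritable(centerIds));
    }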

>> On Thu, Jan 19, 2012 at 9:52 PM, Semih Salihoglu (Created) (JIRA)
>> <j...@apache.org>  wrote:
>>> Extending the API with a master.compute() function.
>>> ---------------------------------------------------
>>>                 Key: GIRAPH-127
>>>                 URL: https://issues.apache.org/jira/browse/GIRAPH-127
>>>             Project: Giraph
>>>          Issue Type: New Feature
>>>          Components: bsp, examples, graph
>>>            Reporter: Semih Salihoglu
>>> First of all, sorry for the long explanation of this feature.
>>> I want to expand the API of Giraph with a new function called
>>> master.compute() that would get called at the master before each
>>> superstep. I will try to explain the purpose it would serve with an
>>> example. Let's say we want to implement the following simplified version
>>> of the k-means clustering algorithm. Pseudocode below:
>>>  * Input: G(V, E), k, numEdgesThreshold, maxIterations
>>>  * Algorithm:
>>>  *   int numEdgesCrossingClusters = Integer.MAX_VALUE;
>>>  *   int iterationNo = 0;
>>>  *   while ((numEdgesCrossingClusters > numEdgesThreshold) &&
>>>  *          iterationNo < maxIterations) {
>>>  *     iterationNo++;
>>>  *     int[] clusterCenters = pickKClusterCenters(k, G);
>>>  *     findClusterCenters(G, clusterCenters);
>>>  *     numEdgesCrossingClusters = countNumEdgesCrossingClusters();
>>>  *   }
>>> The algorithm goes through the following steps in iterations:
>>> 1) Pick k random initial cluster centers
>>> 2) Assign each vertex to the cluster center that it's closest to (in
>>> Giraph, this can be implemented with message passing, similar to how
>>> ShortestPaths is implemented)
>>> 3) Count the number of edges crossing clusters
>>> 4) Go back to step 1 if there are a lot of edges crossing clusters and
>>> we haven't exceeded the maximum number of iterations yet.
>>> In an algorithm like this, steps 2 and 3 are where most of the work
>>> happens, and both parts have very neat message-passing implementations.
>>> I'll try to give an overview without going into the details. Let's say we
>>> define a Vertex in Giraph to hold a custom Writable object with 2 integer
>>> values, and to send messages carrying up to 2 integer values.
>>> Step 2 is very similar to the ShortestPaths algorithm and has two stages:
>>> In the first stage, each vertex checks whether or not it's one of the
>>> cluster centers. If so, it assigns itself the value (id, 0); otherwise it
>>> assigns itself (Null, Null). In the second stage, the vertices assign
>>> themselves to the minimum-distance cluster center by looking at their
>>> neighbors' (cluster center, distance) values (received as 2-integer
>>> messages) and their current values, and changing their values if they find
>>> a lower-distance cluster center. This repeats for some number of supersteps
>>> until every vertex converges. A sketch of the second stage is below.
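
As a sketch (not working code), the second stage could look like this in
vertex.compute(). The ClusterWritable type holding the 2 integer values
and the NULL_CENTER marker for the (Null, Null) initial value are
invented for illustration; assume an unassigned vertex starts with
distance Integer.MAX_VALUE.

    @Override
    public void compute(Iterator<ClusterWritable> msgIterator) {
      ClusterWritable value = getVertexValue();  // (centerId, distance)
      boolean changed = false;
      while (msgIterator.hasNext()) {
        ClusterWritable msg = msgIterator.next();
        // A neighbor reports (its center id, its distance to that
        // center); reaching that center through the neighbor costs one
        // more hop.
        if (msg.centerId != NULL_CENTER
            && msg.distance + 1 < value.distance) {
          value.centerId = msg.centerId;
          value.distance = msg.distance + 1;
          changed = true;
        }
      }
      if (changed) {
        setVertexValue(value);
        sendMsgToAllEdges(new ClusterWritable(value.centerId, value.distance));
      }
      voteToHalt();  // woken up again if a neighbor sends a better value
    }
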
>>> Step 3, counting the number of edges crossing clusters, is also very
>>> easy to implement in Giraph. Once each vertex has a cluster center, the
>>> number of edges crossing clusters can be counted by an aggregator, let's
>>> say called "num-edges-crossing". It would again have two stages: In the
>>> first stage, every vertex just sends its cluster id to all its neighbors.
>>> In the second stage, every vertex looks at its neighbors' cluster ids in
>>> the messages, and for each cluster id that is not equal to its own, it
>>> increments "num-edges-crossing" by 1.
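
A sketch of both stages in vertex.compute(). The superstep bookkeeping
and the aggregate() helper are simplified for illustration, and note that
each crossing edge gets counted from both of its endpoints, so the final
total should be halved wherever it is read.

    @Override
    public void compute(Iterator<LongWritable> msgIterator) {
      if (getSuperstep() == CLUSTER_ID_BROADCAST_SUPERSTEP) {
        // Stage 1: tell every neighbor which cluster this vertex is in.
        sendMsgToAllEdges(new LongWritable(getVertexValue().centerId));
      } else {
        // Stage 2: every message with a different cluster id means one
        // edge crossing clusters.
        long myClusterId = getVertexValue().centerId;
        while (msgIterator.hasNext()) {
          if (msgIterator.next().get() != myClusterId) {
            aggregate("num-edges-crossing", new LongWritable(1));
          }
        }
        voteToHalt();
      }
    }
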
>>> The other 2 steps, steps 1 and 4, are very simple sequential
>>> computations. Step 1 just picks k random vertex ids and puts them into an
>>> aggregator. Step 4 just compares "num-edges-crossing" against a threshold
>>> and also checks whether or not the algorithm has exceeded maxIterations
>>> (not supersteps but iterations of going through steps 1-4). With the
>>> current API, it's not clear where to do these computations. There is a
>>> per-worker function preSuperstep() that can be implemented, but if we
>>> decide to pick a special worker, let's say worker 1, to pick the k
>>> vertices, then we'd waste an entire superstep where only worker 1 would do
>>> work (by picking the k vertices in preSuperstep() and putting them into an
>>> aggregator), and all other workers would be idle. Trying to do this in
>>> worker 1 in postSuperstep() would not work either, because worker 1 needs
>>> to know that all the vertices have converged to understand that it's time
>>> to pick the k vertices or to do the check in step 4, and that information
>>> would only be available to it at the beginning of the next superstep.
>>> A master.compute() extension would run at the master before each
>>> superstep, and would modify the aggregator that keeps the k vertices
>>> before the aggregators are broadcast to the workers. These are all very
>>> short sequential computations, so they would not waste resources the way
>>> a preSuperstep() or postSuperstep() approach would. It would also enable
>>> new algorithms like k-means that are composed of very vertex-centric
>>> computations glued together by small sequential ones. It would basically
>>> extend Giraph with sequential computation in a non-wasteful way.
>>> I am a PhD student at Stanford, and I have been working on my own
>>> BSP/Pregel implementation since last year. It's called GPS. I haven't
>>> distributed it, mainly because in September I learned about Giraph and
>>> decided to slow down my work on it :). We have basically been using GPS
>>> as our own research platform. The source code for GPS is here if anyone
>>> is interested: https://subversion.assembla.com/svn/phd-projects/gps/trunk/
>>> We have the master.compute() feature in GPS, and here's an example of a
>>> KMeans implementation in GPS with master.compute():
>>> https://subversion.assembla.com/svn/phd-projects/gps/trunk/src/java/gps/examples/kmeans/
>>> (Aggregators are called GlobalObjects in GPS). There is another example,
>>> https://subversion.assembla.com/svn/phd-projects/gps/trunk/src/java/gps/examples/randomgraphcoarsening/,
>>> which I'll skip explaining because it's very detailed and would make
>>> similar points to the ones I am trying to make with k-means. In general,
>>> master.compute() would make it possible to glue together any graph
>>> algorithm that is composed of multiple stages, with different message
>>> types and computations, where each stage is conducive to running with
>>> vertex.compute(). There are many examples of such algorithms: recursive
>>> partitioning, triangle counting, even much simpler things like finding
>>> shortest paths for 100 vertices in pieces (first to 5 vertices, then to
>>> another 5, then to another 5, etc.), which would be good because trying
>>> to find shortest paths to all 100 vertices at once requires very large
>>> messages (each message would need to store 100 integers). A sketch of
>>> that last idea is below.
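
As a sketch of that last example, the glue on the master could be as
small as this. All names are invented; "active-in-batch" would be a sum
aggregator counting vertices still updating distances, and the first
batch of 5 sources is assumed to be published at job setup.

    // Hypothetical master-side glue for batched shortest paths.
    private int batchStart = 0;
    private final long[] sourceIds = new long[100];  // the 100 sources

    public void compute() {
      long stillActive =
          ((LongSumAggregator) getAggregator("active-in-batch"))
              .getAggregatedValue().get();
      if (stillActive > 0) {
        return;  // the current 5 sources haven't converged yet
      }
      batchStart += 5;
      if (batchStart >= sourceIds.length) {
        haltComputation();  // all 100 sources are done
        return;
      }
      // Publish the next 5 source ids; the workers reset their distances
      // and restart the propagation from these sources next superstep.
      setAggregatedValue("current-sources", new LongArrayWritable(
          Arrays.copyOfRange(sourceIds, batchStart, batchStart + 5)));
    }
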
>>> If the Giraph team approves, I would like to take a similar approach to
>>> implementing this feature in Giraph as I've done in GPS. Overall:
>>> Add a Master.java to org.apache.giraph.graph that is the default Master,
>>> with a compute function that by default aggregates all aggregators and
>>> checks whether or not the computation has ended (by comparing numVertices
>>> with numFinishedVertices). This would be a refactoring of the
>>> org.apache.giraph.graph.BspServiceMaster class (as far as I can see).
>>> Extend GiraphJob to have a setMaster() method to set a master class (by
>>> default it would be the default Master above).
>>> The rest would be sending the custom master class to probably all
>>> workers, but only the master would instantiate it with reflection. I need
>>> to learn more about how to do these things; I am not familiar with that
>>> part of the Giraph code base yet. A rough sketch of the proposed surface
>>> is below.
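
Concretely, the user-facing surface could be as small as the following;
the names are tentative and just meant to make the proposal easier to
discuss.

    // A default implementation would replicate what BspServiceMaster does
    // today: gather all aggregators and check whether numFinishedVertices
    // has reached numVertices.
    public abstract class Master {
      /**
       * Called on the master before each superstep, after the previous
       * superstep's aggregators have been gathered and before they are
       * broadcast back to the workers.
       */
      public abstract void compute();
    }

    // And on the job side:
    GiraphJob job = new GiraphJob(conf, "kmeans");
    job.setMaster(KMeansMaster.class);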
