Each job was run many times with different start nodes, and the results did not differ much from these, on the same cluster of course. Let's just add a link to the random graph generator source code. In any case, I have no server to upload these files to. :)
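For reference, a minimal sketch of what such a random graph generator could look like is below. The class name, the fixed seed, and the tab-separated <vertex>\t<neighbor>:<weight> output format are illustrative assumptions, not the actual generator or the exact input format the benchmark used.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Random;

/**
 * Illustrative random graph generator (not the actual benchmark tool).
 * Writes one vertex per line: <vertexId>\t<neighbor>:<weight>\t...
 * so the output can be fed to an SSSP-style job that reads a
 * plain-text adjacency list.
 */
public class RandomGraphGenerator {

  public static void main(String[] args) throws IOException {
    int numVertices = args.length > 0 ? Integer.parseInt(args[0]) : 1000000;
    int edgesPerVertex = args.length > 1 ? Integer.parseInt(args[1]) : 10;
    String output = args.length > 2 ? args[2] : "random-graph.txt";

    Random random = new Random(42L); // fixed seed so repeated runs use the same graph

    try (BufferedWriter writer = new BufferedWriter(new FileWriter(output))) {
      for (int v = 0; v < numVertices; v++) {
        StringBuilder line = new StringBuilder();
        line.append(v);
        for (int e = 0; e < edgesPerVertex; e++) {
          int target = random.nextInt(numVertices); // uniformly random neighbor
          int weight = 1 + random.nextInt(100);     // edge weight in [1, 100]
          line.append('\t').append(target).append(':').append(weight);
        }
        writer.write(line.toString());
        writer.newLine();
      }
    }
  }
}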
Additionally, I want to re-test very large graphs with the 0.4 release candidates later. By the way, is PageRank fixed?

On Thu, Dec 15, 2011 at 3:28 PM, Thomas Jungblut <[email protected]> wrote:

> Let's add graphs, no one is going to read tables in benchmarks.
>
> Regards,
> Thomas
>
> ---------- Forwarded message ----------
> From: Apache Wiki <[email protected]>
> Date: 2011/12/15
> Subject: [Hama Wiki] Trivial Update of "Benchmarks" by edwardyoon
> To: Apache Wiki <[email protected]>
>
> Dear Wiki user,
>
> You have subscribed to a wiki page or wiki category on "Hama Wiki" for
> change notification.
>
> The "Benchmarks" page has been changed by edwardyoon:
> http://wiki.apache.org/hama/Benchmarks?action=diff&rev1=21&rev2=22
>
> + == Single Shortest Path Problem ==
> +
> +  * Experimental environments
> +   * One rack (16 nodes, 256 cores) cluster
> +   * Hadoop 0.20.2
> +   * Hama TRUNK r1213634
> +   * 10G network
> +
> + ||Vertices (x10 edges)|| Tasks|| Supersteps|| Job Execution Time||
> + ||10 million|| 6|| 5423|| 656.393 seconds||
> + ||20 million|| 12|| 2231|| 449.542 seconds||
> + ||30 million|| 18|| 4398|| 886.845 seconds||
> + ||40 million|| 24|| 5432|| 1112.912 seconds||
> + ||50 million|| 30|| 10747|| 2079.262 seconds||
> + ||60 million|| 36|| 8158|| 1754.935 seconds||
> + ||70 million|| 42|| 20634|| 4325.141 seconds||
> + ||80 million|| 48|| 14356|| 3236.194 seconds||
> + ||90 million|| 54|| 11480|| 2785.996 seconds||
> + ||100 million|| 60|| 7679|| 2169.528 seconds||
> +
> +
> == Random Communication Benchmark ==
>
> == 4 racks ==
>
> --
> Thomas Jungblut
> Berlin <[email protected]>

--
Best Regards, Edward J. Yoon
@eddieyoon
