[
https://issues.apache.org/jira/browse/HAMA-420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13088044#comment-13088044
]
Thomas Jungblut commented on HAMA-420:
--------------------------------------
The file creator mixed up the order, so I had to regenerate a new file with
basically the same properties. I'm going to upload it now.
I'll continue working on HAMA-423. I've written a whole new partitioner and
refactored the examples to use it; it is actually much faster than the old
one :). The MR job I mentioned in the last post is already written too, but
I don't want to add a JobTracker dependency to Hama.
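For reference, here is a minimal sketch of the hash-partitioning idea:
adjacency-list rows keyed by vertex id are assigned to BSP tasks by hashing
the key. The class name and getPartition signature below are illustrative
assumptions only, not the actual HAMA-423 code or Hama's Partitioner API.

{code:java}
// Illustrative sketch only: assigns an adjacency-list row to one of
// numTasks BSP peers by hashing its vertex key. The real HAMA-423
// partitioner may differ in interface and behavior.
public class VertexHashPartitioner {

  /**
   * @param vertexKey key of the adjacency-list row (e.g. a URL or vertex id)
   * @param numTasks  number of BSP tasks the input is split across
   * @return the index of the task that should receive this row
   */
  public int getPartition(String vertexKey, int numTasks) {
    // Mask the sign bit so negative hash codes don't produce negative indices.
    return (vertexKey.hashCode() & Integer.MAX_VALUE) % numTasks;
  }

  // Tiny usage example: distribute three rows across 4 tasks.
  public static void main(String[] args) {
    VertexHashPartitioner partitioner = new VertexHashPartitioner();
    String[] keys = { "http://a.example", "http://b.example", "http://c.example" };
    for (String key : keys) {
      System.out.println(key + " -> task " + partitioner.getPartition(key, 4));
    }
  }
}
{code}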
> Generate random data for Pagerank example
> -----------------------------------------
>
> Key: HAMA-420
> URL: https://issues.apache.org/jira/browse/HAMA-420
> Project: Hama
> Issue Type: New Feature
> Components: examples
> Reporter: Thomas Jungblut
>
> As stated in a comment on Whirr's JIRA:
> https://issues.apache.org/jira/browse/WHIRR-355
> We should generate a big file (1-5 GB?) for the PageRank example (see the
> generator sketch below). We wanted to add this as part of the contrib, but
> we skipped/lost it somehow.
> I started crawling several pages, starting from Google News, but then my
> free Amazon EC2 quota expired and I had to stop the crawl.
> > We need some cloud to crawl
> > We need a place to make the data available
> The stuff we need is already coded here:
> http://code.google.com/p/hama-shortest-paths/source/browse/#svn%2Ftrunk%2Fhama-gsoc%2Fsrc%2Fde%2Fjungblut%2Fcrawl
> Afterwards, an M/R processing job in the "processing" subpackage has to be
> run on the crawler's output. This job ensures that the adjacency matrix
> is valid.
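As an aside, a big input file could also be generated synthetically instead
of crawled. Below is a minimal sketch of such a generator, assuming a
plain-text adjacency-list format (one line per vertex: the vertex id
followed by its tab-separated outgoing edges); the class name, output format,
and size parameters are assumptions, not necessarily what the PageRank
example expects.

{code:java}
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Random;

// Illustrative sketch only: writes a random adjacency list as plain text,
// one vertex per line ("vertexId<TAB>neighbor1<TAB>neighbor2..."). The
// actual PageRank example input format may differ.
public class RandomGraphGenerator {

  public static void main(String[] args) throws IOException {
    int numVertices = 1000000;   // increase to reach the 1-5 GB target
    int maxOutEdges = 20;        // upper bound on out-degree per vertex
    Random random = new Random(42L);

    try (BufferedWriter out = new BufferedWriter(new FileWriter("pagerank-input.txt"))) {
      for (int vertex = 0; vertex < numVertices; vertex++) {
        StringBuilder line = new StringBuilder().append(vertex);
        int outDegree = 1 + random.nextInt(maxOutEdges);
        for (int e = 0; e < outDegree; e++) {
          // Every target is an existing vertex id, so the resulting
          // adjacency matrix stays consistent (no dangling references).
          line.append('\t').append(random.nextInt(numVertices));
        }
        out.write(line.toString());
        out.newLine();
      }
    }
  }
}
{code}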
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira