[
https://issues.apache.org/jira/browse/HAMA-420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13084630#comment-13084630
]
Thomas Jungblut commented on HAMA-420:
--------------------------------------
Okay, I started the crawler once again on Amazon.
Running at 50 sites/s, it should be finished soon :)
@Edward, would you please take a look at the patch in WHIRR-355? Thanks.
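
As a fallback to crawling, the random input could be generated directly. A minimal sketch (hypothetical code, not the actual HAMA-420 patch; class and method names are made up) that writes one adjacency-list line per vertex, "vertexId<TAB>outlink outlink ...":

```java
import java.util.Random;
import java.util.StringJoiner;

public class RandomAdjacencyGenerator {

    // Build one adjacency-list line: "vertexId<TAB>outlink1 outlink2 ...".
    // Outlink targets are drawn uniformly from [0, numVertices).
    static String adjacencyLine(int vertex, int numVertices, int outDegree, Random rnd) {
        StringJoiner links = new StringJoiner(" ");
        for (int i = 0; i < outDegree; i++) {
            links.add(Integer.toString(rnd.nextInt(numVertices)));
        }
        return vertex + "\t" + links;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed, so runs are reproducible
        int numVertices = 10;
        for (int v = 0; v < numVertices; v++) {
            System.out.println(adjacencyLine(v, numVertices, 3, rnd));
        }
    }
}
```

Scaling numVertices (and streaming the lines to HDFS instead of stdout) would get us to the 1-5 GB range without needing EC2 time.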
> Generate random data for Pagerank example
> -----------------------------------------
>
> Key: HAMA-420
> URL: https://issues.apache.org/jira/browse/HAMA-420
> Project: Hama
> Issue Type: New Feature
> Components: examples
> Reporter: Thomas Jungblut
>
> As stated in a comment on Whirr's JIRA:
> https://issues.apache.org/jira/browse/WHIRR-355
> We should generate a big file (1-5 GB?) for the PageRank example. We wanted to add
> this as part of the contrib, but we skipped/lost it somehow.
> I started crawling several pages, starting from Google News. But then my free
> Amazon EC2 quota expired and I had to stop the crawl.
> > We need some cloud to crawl
> > We need a place to make the data available
> The stuff we need is already coded here:
> http://code.google.com/p/hama-shortest-paths/source/browse/#svn%2Ftrunk%2Fhama-gsoc%2Fsrc%2Fde%2Fjungblut%2Fcrawl
> Afterwards, an m/r processing job in the subpackage "processing" has to be run
> on the output of the crawler. This job ensures that the adjacency matrix
> is valid.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira