Hello list, I'm porting the recrawl script to run on Hadoop (on an already existing Hadoop cluster). My version is attached.
What I found out is that Indexer and SolrIndexer want an explicit list of segments. Obtaining the contents of a directory through HDFS from a script is awkward (/crawl/segments/* gets expanded by the local bash, and hadoop dfs -ls returns the listing with details such as permissions, owners and dates), so I wrote these little patches that add a -dir option like the one SegmentMerger and LinkDb already have. They are attached too and might be of interest to somebody else; a short standalone sketch of the listing logic they add precedes the diffs below.

--
Claudio Martella
Digital Technologies Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax +39 0471 068 129
[email protected]
http://www.tis.bz.it
multipass-hadoop-crawler.sh
Description: Bourne shell script
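
The two diffs below are the patches against Indexer and SolrIndexer. The listing logic they both add boils down to something like this standalone sketch (the class name here is only for illustration; the calls are the same ones used in the patches):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.nutch.util.HadoopFSUtil;
import org.apache.nutch.util.NutchConfiguration;

// Illustrative class name; in the patches this logic lives inside Indexer/SolrIndexer.
public class ListSegments {

  public static void main(String[] args) throws Exception {
    Configuration conf = NutchConfiguration.create();
    FileSystem fs = FileSystem.get(conf);

    // Expand e.g. crawl/segments into its per-segment subdirectories on HDFS,
    // skipping plain files, exactly as the -dir branch of the patches does.
    FileStatus[] fstats = fs.listStatus(new Path(args[0]),
        HadoopFSUtil.getPassDirectoriesFilter(fs));
    Path[] dirs = HadoopFSUtil.getPaths(fstats);

    List<Path> segments = new ArrayList<Path>();
    for (int i = 0; i < dirs.length; i++) {
      segments.add(dirs[i]);
      System.out.println(dirs[i]);
    }
  }
}
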
28a29
> import org.apache.hadoop.fs.FileStatus;
39a41
> import org.apache.nutch.util.HadoopFSUtil;
96c98,99
<
---
>     Configuration conf = NutchConfiguration.create();
>     final FileSystem fs = FileSystem.get(conf);
98,99c101,110
<     for (int i = 3; i < args.length; i++) {
<       segments.add(new Path(args[i]));
---
>     if (args[3].equals("-dir")) {
>       FileStatus[] fstats = fs.listStatus(new Path(args[4]),
>           HadoopFSUtil.getPassDirectoriesFilter(fs));
>       Path[] files = HadoopFSUtil.getPaths(fstats);
>       for (int j = 0; j < files.length; j++)
>         segments.add(files[j]);
>     } else {
>       for (int i = 3; i < args.length; i++) {
>         segments.add(new Path(args[i]));
>       }
28a29,30
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
34a37
> import org.apache.nutch.util.HadoopFSUtil;
78c84
< System.err.println("Usage: Indexer <index> <crawldb> <linkdb> <segment>
...");
---
> System.err.println("Usage: Indexer <index> <crawldb> <linkdb> (-dir
> <segments> | <segment> ...)");
85c91,92
<
---
>     Configuration conf = NutchConfiguration.create();
>     final FileSystem fs = FileSystem.get(conf);
87,88c94,103
<     for (int i = 3; i < args.length; i++) {
<       segments.add(new Path(args[i]));
---
>     if (args[3].equals("-dir")) {
>       FileStatus[] fstats = fs.listStatus(new Path(args[4]),
>           HadoopFSUtil.getPassDirectoriesFilter(fs));
>       Path[] files = HadoopFSUtil.getPaths(fstats);
>       for (int j = 0; j < files.length; j++)
>         segments.add(files[j]);
>     } else {
>       for (int i = 3; i < args.length; i++) {
>         segments.add(new Path(args[i]));
>       }

