[ https://issues.apache.org/jira/browse/NUTCH-443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12476357 ]

Doğacan Güney commented on NUTCH-443:
-------------------------------------

Hi Andrzej,

> * in my opinion it's easier to add missing CrawlDatum's (with correctly set 
> fetch time) for the new urls to the 
> output rather than work-around this by passing around the fetch time in 
> metadata, and then again 
> compensating in Indexer and CrawlDbReducer for the lack of these fetchDatum-s.

I guess what you mean is pushing STATUS_FETCH_SUCCESS datums to crawl_parse, 
right? I can probably do this by changing ParseImpl and adding a new boolean 
that indicates whether the parse was fetched or freshly generated during parsing.
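
Something along these lines is what I have in mind - just a sketch; the flag 
name (isCanonical), the sub-document URL variable and where exactly the datum 
gets emitted are assumptions on my part, not what the current patch does:

// Hypothetical flag on ParseImpl: true for the URL that was actually fetched,
// false for entries freshly generated during parsing (e.g. feed items).
private boolean isCanonical = true;

public boolean isCanonical() { return isCanonical; }

// When writing crawl_parse, add the missing CrawlDatum for each new URL,
// with the fetch time set correctly, so that Indexer and CrawlDbReducer
// no longer have to compensate for the missing fetchDatum.
if (!parse.isCanonical()) {
  CrawlDatum newDatum = new CrawlDatum();
  newDatum.setStatus(CrawlDatum.STATUS_FETCH_SUCCESS);
  newDatum.setFetchTime(fetchTime);          // taken from the original fetch datum
  output.collect(new Text(subUrl), newDatum);
}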

> * in Fetcher / Fetcher2 you don't pass the signature in case when there is no 
> valid Parse output, but in the 
> current versions of Fetchers the signature is still calculated and passed in 
> datum.setSignature() (which ends 
> up in crawl_fetch). 

OK, I will fix it.
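
For reference, the behaviour to preserve is roughly the following (a sketch of 
what the current Fetchers do, not of the patch):

// The signature is calculated even when there is no valid Parse, and stored
// on the fetch datum via setSignature(), so it still ends up in crawl_fetch.
Signature sig = SignatureFactory.getSignature(getConf());
datum.setSignature(sig.calculate(content, parse));  // parse may be empty/failed here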

> * using a generic Map<String, Parse> is IMHO inappropriate, as I indicated 
> earlier, especially since this Map requires special post-processing in 
> ParseUtil.processParseMap - and what would happen if I didn't use 
> ParseUtil? I think this calls for a special-purpose class (ParseResult?), 
> which would encapsulate this behavior without exposing it to its users 
> (or even worse - allowing users to bypass it). This class would also 
> help us to avoid somewhat ugly "convenience" methods in ParseStatus and 
> ParseImpl - these details would be hidden in one of the constructors of 
> ParseResult. 

> * I'm also not sure why we use Map<String, Parse> and not Map<Text, Parse>, 
> since in all further 
> processing we need to create Text objects ...

If we are going with a special-purpose class, there is one more thing I would 
like to change. Consider the case of a zip archive with the URL 
http://foo.bar/baz.zip that contains two files, spam.txt and egg.txt. After 
parsing it you will return something like <key1, parse of spam.txt>, 
<key2, parse of egg.txt> and perhaps <original_url, who knows what>. 

Now, whatever key1 and key2 are, they are not really URLs to be fetched. So I 
want to add new fetch and db statuses (let's call them STATUS_FETCH_FAKE and 
STATUS_DB_FAKE). During parsing, key1 and key2 will be written with FETCH_FAKE, 
and updatedb will write them to crawldb as DB_FAKE. Nutch will still index 
things with FAKE status, but generate will never generate them to be fetched, 
and updatedb will never change their status to DB_UNFETCHED (since, as I said 
before, they can't be fetched).

So, ParseResult will contain a group of <'real' URL, parse> pairs and a group 
of <'phony' URL, parse> pairs.
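
To make this more concrete, the ParseResult I am picturing would look 
something like the sketch below - all names here are made up, just to show 
the shape of the thing:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.io.Text;
import org.apache.nutch.parse.Parse;

// Sketch of a special-purpose ParseResult (hypothetical API). It hides the
// Map<Text, Parse> details and remembers, per URL, whether an entry is the
// fetched URL itself or a 'phony' sub-document (a zip entry, a feed item,
// ...) that must never be generated for fetching.
public class ParseResult implements Iterable<Map.Entry<Text, Parse>> {

  private final Map<Text, Parse> parses = new HashMap<Text, Parse>();
  private final Set<Text> phony = new HashSet<Text>();

  /** The parse of the URL that was actually fetched. */
  public void addRealParse(Text url, Parse parse) {
    parses.put(url, parse);
  }

  /** A sub-document: fetcher writes it as FETCH_FAKE, updatedb as DB_FAKE. */
  public void addPhonyParse(Text url, Parse parse) {
    parses.put(url, parse);
    phony.add(url);
  }

  public boolean isPhony(Text url) {
    return phony.contains(url);
  }

  public Parse get(Text url) {
    return parses.get(url);
  }

  public Iterator<Map.Entry<Text, Parse>> iterator() {
    return parses.entrySet().iterator();
  }
}

For the zip above this would mean one addRealParse for http://foo.bar/baz.zip 
and two addPhonyParse calls for key1 and key2; generate and updatedb could 
then check isPhony() (or the FAKE status) and skip those entries.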

What do you think?

> * the new section in HtmlParseFilters breaks the loop on encountering the 
> first error, and leaves the parse 
> results incompletely filtered. It should simply continue - the result is an 
> aggregation of more or less 
> independent documents that are parsed on their own. 

This is the same as the old behavior. Why change it? (There was a bug there, 
but I fixed it in one of the newer patches.)
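
If we do change it, I read the suggestion as something like the following - a 
sketch only; the per-entry loop and the variable names are mine, not the 
current code:

// Filter each sub-document independently and keep going when one of them
// fails, instead of breaking out of the loop on the first error and leaving
// the remaining parses unfiltered.
for (Map.Entry<Text, Parse> entry : parseResult) {
  Parse filtered = entry.getValue();
  for (HtmlParseFilter filter : filters) {
    filtered = filter.filter(content, filtered, metaTags, doc);
    if (!filtered.getData().getStatus().isSuccess()) {
      break;                     // give up on this sub-document only...
    }
  }
  result.put(entry.getKey(), filtered);  // ...but continue with the next one
}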

> * the comment about redirects in Parser.java is misplaced - I think this 
> contract should be both defined and enforced in the Fetcher. 

OK.

> And finally, I think this is a significant change in the way how content 
> parsers work with the rest of the 
> framework, so we should wait with this patch after the 0.9 release - and we 
> should push 0.9 out of the door 
> really soon ...

Anything to get 0.9 out of the door :)

I will send an updated patch tomorrow that fixes 1, 2 and 4 (and 5, if I am 
missing something there), unless someone beats me to it. I want to hear what 
others think about 3 before doing anything.

Thanks for your review and comments.


> allow parsers to return multiple Parse object, this will speed up the rss 
> parser
> --------------------------------------------------------------------------------
>
>                 Key: NUTCH-443
>                 URL: https://issues.apache.org/jira/browse/NUTCH-443
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>    Affects Versions: 0.9.0
>            Reporter: Renaud Richardet
>         Assigned To: Chris A. Mattmann
>            Priority: Minor
>             Fix For: 0.9.0
>
>         Attachments: NUTCH-443-draft-v1.patch, NUTCH-443-draft-v2.patch, 
> NUTCH-443-draft-v3.patch, NUTCH-443-draft-v4.patch, NUTCH-443-draft-v5.patch, 
> NUTCH-443-draft-v6.patch, NUTCH-443-draft-v7.patch, 
> NUTCH-443.022507.patch.txt, parse-map-core-draft-v1.patch, 
> parse-map-core-untested.patch, parsers.diff
>
>
> allow Parser#parse to return a Map<String,Parse>. This way, the RSS parser 
> can return multiple parse objects, that will all be indexed separately. 
> Advantage: no need to fetch all feed-items separately.
> see the discussion at 
> http://www.nabble.com/RSS-fecter-and-index-individul-how-can-i-realize-this-function-tf3146271.html


