Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change 
notification.

The following page has been changed by AndrzejBialecki:
http://wiki.apache.org/nutch/FixingOpicScoring

------------------------------------------------------------------------------
   1. Each page also has a "historical cash value" that represents the cash 
that has flowed through the page. This starts at 0.0.
   1. The score of a page is represented by the sum of the historical and 
current cash values.
   1. There is one special, virtual root page that has bidirectional links with 
every other page in the entire web graph. 
-   a. When a crawl is initially started, the root page has a cash value of 
1.0, and this is then distributed (as 1/n) to the n injected pages.
+   a. When a crawl is initially started, the root page has a cash value of 
1.0, and this is then distributed (as 1/n) to the n injected pages. ''(What 
happens when more pages are injected?)''
    a. Whenever a page is being processed, the root page can receive some of 
the page's current cash, due to the implicit link from every page to the root 
page.
-  1. To handle recrawling, every page also has the last time it was processed. 
In addition, there's a fixed "time window" that's used to calculate the 
historical cash value of a page. For the Xyleme crawler, this was set at 3 
months, but it seems to be heavily dependent on the rate of re-crawling 
(average time between page refetches).
+  1. To handle recrawling, every page also has the last time it was processed. 
In addition, there's a fixed "time window" that's used to calculate the 
historical cash value of a page. For the Xyleme crawler, this was set at 3 
months, but it seems to be heavily dependent on the rate of re-crawling 
(average time between page refetches). ''We could use a value derived from 
fetchInterval.''
   1. When a page is being processed, its historical cash value is recalculated 
from its current cash value and its previous historical cash value. The new value 
is estimated via interpolation, so as to approximate what you'd get if every page 
were re-fetched and processed at the same regular interval. Details are below, 
and a sketch follows this list.
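  
  To make the interpolation concrete, here is a minimal sketch in Java. The class 
and member names (!OpicPageState, historicalCash, currentCash, process) are 
illustrative, not actual Nutch code, and the update rule follows one plausible 
reading of the OPIC paper: a page processed within the window has its old history 
aged proportionally, while a page not seen for longer than the window has its 
accumulated cash scaled down to a per-window amount.
  
  {{{
  // Hypothetical per-page OPIC state; names are illustrative, not actual
  // Nutch classes. The update rule is one reading of the OPIC paper's
  // interpolation over a fixed time window.
  public class OpicPageState {
    private float historicalCash = 0.0f; // cash that has flowed through the page
    private float currentCash = 0.0f;    // cash accumulated since last processing
    private long lastProcessed = 0L;     // when the page was last processed (ms)
  
    /** The score of a page: the sum of its historical and current cash. */
    public float score() {
      return historicalCash + currentCash;
    }
  
    /** Cash received via an inlink (or from the root page). */
    public void receive(float cash) {
      currentCash += cash;
    }
  
    /**
     * Called when the page is processed at time 'now'; 'window' is the
     * fixed time window (3 months for Xyleme, or perhaps a value derived
     * from fetchInterval).
     */
    public void process(long now, long window) {
      long dt = now - lastProcessed;
      if (dt >= window) {
        // Not processed within the window: scale the accumulated cash
        // down to an estimated per-window amount.
        historicalCash = currentCash * ((float) window / dt);
      } else {
        // Processed within the window: age the old history and add the
        // newly accumulated cash.
        historicalCash = historicalCash * ((float) (window - dt) / window)
                       + currentCash;
      }
      lastProcessed = now;
      currentCash = 0.0f; // this cash is now distributed to the outlinks
    }
  }
  }}}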
  
  === Details of cash distribution ===
  
  When distributing a page's cash to its outlinks, there are a few details 
that need to be handled:
  
-  * Some amount of the cash goes to the root page, while the rest of the cash 
goes to the real outlinks. If a page is a leaf (no outlinks) then all of the 
cash goes to the root page. The ratio of real/root can be adjusted to put 
greater emphasis on new pages versus recrawling, but the OPIC paper is a bit 
fuzzy about how to do this properly.
+  1. Some amount of the cash goes to the root page, while the rest of the cash 
goes to the real outlinks. If a page is a leaf (no outlinks) then all of the 
cash goes to the root page. The ratio of real/root can be adjusted to put 
greater emphasis on new pages versus recrawling, but the OPIC paper is a bit 
fuzzy about how to do this properly (see the sketch after this list).
-  * Self-referencial links should (I think) be ignored. But that's another 
detail to confirm.
+  1. Self-referential links should (I think) be ignored. But that's another 
detail to confirm.
-  * There's a mention in the paper to adjusting the amount of cash given to 
internal (same domain) links versus external links, but no real details. This 
would be similar to the current Nutch support for providing a different initial 
score for internal vs. external pages, and the "ignore internal links" flag.
+  1. There's a mention in the paper of adjusting the amount of cash given to 
internal (same domain) links versus external links, but no real details. This 
would be similar to the current Nutch support for providing a different initial 
score for internal vs. external pages, and the "ignore internal links" flag.
-  * I'm not sure how best to efficiently implement the root page such that it 
efficiently gets cash from every single page that's processed. If you treat it 
as a special URL, then would that slow down the update to the crawldb?
+  1. I'm not sure how best to implement the root page so that it efficiently 
receives cash from every single page that's processed. If you treat it as a 
special URL, then would that slow down the update to the crawldb?
-  * The OPIC paper talks about giving some of the root page cash to pages to 
adjust the crawl priorities. Unfortunately not much detail was provided. The 
three approaches mentioned were:
+  1. The OPIC paper talks about giving some of the root page's cash to pages to 
adjust the crawl priorities. Unfortunately not much detail was provided. The 
three approaches mentioned were:
    a. Give cash to unfetched pages, to encourage broadening the crawl.
    a. Give cash to fetched pages, to encourage recrawling.
    a. Give cash to specific pages in a target area (e.g. by domain), for 
focused crawling.
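  
  As a companion to the list above, here is a hedged sketch of the distribution 
step. The rootShare and internalWeight parameters are assumptions (the OPIC paper 
leaves both ratios open), and the !Outlink class and root-page accumulator are 
plain Java stand-ins for whatever representation the crawldb update ends up using.
  
  {{{
  import java.util.ArrayList;
  import java.util.List;
  
  // Hedged sketch of distributing a page's current cash. rootShare and
  // internalWeight are assumed tuning knobs, not settings from the paper.
  public class CashDistributor {
  
    static class Outlink {
      final String url;
      final boolean internal; // same domain as the source page
      float credited = 0.0f;  // cash credited to this link's target
      Outlink(String url, boolean internal) {
        this.url = url;
        this.internal = internal;
      }
    }
  
    private final float rootShare;      // fraction always sent to the root page
    private final float internalWeight; // weight of internal links vs. external
  
    CashDistributor(float rootShare, float internalWeight) {
      this.rootShare = rootShare;
      this.internalWeight = internalWeight;
    }
  
    /** Distributes 'cash' over the outlinks; returns the root page's share. */
    float distribute(String pageUrl, float cash, List<Outlink> outlinks) {
      List<Outlink> targets = new ArrayList<Outlink>();
      float totalWeight = 0.0f;
      for (Outlink link : outlinks) {
        if (link.url.equals(pageUrl)) {
          continue; // ignore self-referential links (item 2 above)
        }
        targets.add(link);
        totalWeight += link.internal ? internalWeight : 1.0f;
      }
      if (targets.isEmpty()) {
        return cash; // leaf page: all of the cash goes to the root page
      }
      float toRoot = cash * rootShare;
      float perWeight = (cash - toRoot) / totalWeight;
      for (Outlink link : targets) {
        link.credited += perWeight * (link.internal ? internalWeight : 1.0f);
      }
      return toRoot;
    }
  }
  }}}
  
  Raising rootShare shifts emphasis toward whatever policy redistributes the 
root page's cash (broadening, recrawling, or focused crawling, per item 5), 
while lowering it puts more weight on following real links.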
