Hi all,
I have a question (which might sound stupid, I know) regarding the
performance of the extraction framework when processing Wikipedias.
Whenever I run any of the extractors on any Wikipedia, I notice that
the time to process each single Wikipedia page decreases as the
extraction goes on (as per the stats produced by the framework).
What is the reason for this?
Also, I am wondering whether anything can be done to speed up the
extraction process, apart from upgrading the hardware.
Any ideas?
Cheers
Andrea
_______________________________________________
Dbpedia-developers mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dbpedia-developers