Hi Amit,
>"We have been trying to setup an instance of dbpedia to continously
extract data from wikipedia dumps/updates. While"
We would like to do the same for the DBpedia Portuguese. If you can share
any code, it would be much appreciated.
Cheers
Pablo
On Mar 19, 2012 10:38 AM, "Amit Kumar" <[email protected]> wrote:
> Hi,
> We have been trying to set up an instance of DBpedia to continuously
> extract data from Wikipedia dumps/updates. While going through the output,
> we observed that the image extractor was only picking up the first image
> for any page.
>
> I can see commented-out code in the ImageExtractor which seems to pick up
> all images. In its place we now have code that returns on the first image
> it encounters. My questions are:
>
>
>    1. Does the commented-out code actually work? Does it really pick up
>    all the images on a particular page?
>    2. Why was this change made in the code?
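>
> To make the difference concrete, here is a rough, hypothetical Scala sketch
> (not the actual ImageExtractor source; the object name and the flat list of
> link targets are made up) contrasting "return the first image" with
> "collect every image":
>
>     object ImageSketch {
>       // Hypothetical stand-in for the link targets found on one wiki page.
>       val links = List("Foo.svg", "Some_article", "Bar.jpg", "Baz.png")
>
>       val imageSuffixes = List(".jpg", ".jpeg", ".png", ".svg", ".gif")
>
>       def isImage(target: String): Boolean =
>         imageSuffixes.exists(suffix => target.toLowerCase.endsWith(suffix))
>
>       def main(args: Array[String]): Unit = {
>         // Current behaviour as we read it: stop at the first image found.
>         val firstImage: Option[String] = links.find(isImage)
>
>         // Behaviour the commented-out code appears to aim for: keep them all.
>         val allImages: List[String] = links.filter(isImage)
>
>         println(s"first image: $firstImage") // Some(Foo.svg)
>         println(s"all images:  $allImages")  // List(Foo.svg, Bar.jpg, Baz.png)
>       }
>     }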
>
>
>
> Thanks and Regards
> Amit
>
>
>
_______________________________________________
Dbpedia-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion