Hi Hady,
On Fri, Apr 26, 2013 at 12:21 AM, Hady elsahar <[email protected]> wrote:
> Hello Dimitris
>
> I finished part of the application, and I have a plan for some other
> parts, but I need it reviewed to decide whether this level of detail is
> enough or whether I should write more or less.
> So should I wait and submit the proposal at the end, or should I post the
> finished part here on the mailing list?
>
Good! This is up to you: you can either post it here on the list or submit
it directly on the Melange website. You can update your online application
as many times as you want until the deadline.
>
> Besides this, I have some questions:
>
> 1- Wikidata has its own defined properties
> <http://meta.wikimedia.org/wiki/Wikidata/Notes/Data_model_primer>.
> Should mapping Wikidata properties to existing DBpedia ones be
> considered part of this task?
>
Of course not! This will be handled by the community. You will, of course,
have to map a few properties for demo / testing / debugging purposes.
> 2- Regarding merging Wikidata entities into DBpedia ones: do we have a
> linkage between DBpedia entity URIs and other useful info (Wikipedia page
> ID, revision ID) that we can use for such a task?
>
Yes, Andrea already submitted a pull request for this [1], so for every
Wikidata item (i.e., Qxxx) we extract links to all Wikipedia articles and
DBpedia resources. We also have a script that can do a dummy merge for a
dump [2].
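To make the sitelink-to-DBpedia mapping concrete, here is a minimal Python sketch. The item layout below is a simplified stand-in for the Wikidata JSON dump format and the URIs are illustrative; the actual extractor lives in the Scala extraction framework.

```python
# Minimal sketch: turn a Wikidata item's sitelinks into DBpedia resource URIs.
# The JSON structure here is a simplified stand-in for the Wikidata dump format.
import json

def sitelinks_to_dbpedia(item_json):
    """Yield (wikidata_id, language, dbpedia_uri) for every sitelink."""
    item = json.loads(item_json)
    qid = item["id"]
    for site, link in item["sitelinks"].items():
        lang = site.replace("wiki", "")          # e.g. "enwiki" -> "en"
        title = link["title"].replace(" ", "_")  # DBpedia titles use underscores
        prefix = ("http://dbpedia.org/resource/" if lang == "en"
                  else "http://%s.dbpedia.org/resource/" % lang)
        yield qid, lang, prefix + title

# Hypothetical example item (structure only loosely follows the real dumps).
example = json.dumps({
    "id": "Q42",
    "sitelinks": {
        "enwiki": {"title": "Douglas Adams"},
        "dewiki": {"title": "Douglas Adams"},
    },
})

for qid, lang, uri in sitelinks_to_dbpedia(example):
    print(qid, lang, uri)
```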
> 3- Regarding the level of detail: the project includes some tasks that
> will involve research. What level of abstraction should I provide in the
> application? Should I offer solutions in the plans, or just broad
> outlines of the problems and tasks to be faced?
>
Locating the hard parts this early shows insight, and you should definitely
include this in your application; if you can also suggest solutions, even
better ;)
You shouldn't include implementation details though, just the general
solution idea.
Best,
Dimitris
[1] https://github.com/dbpedia/extraction-framework/pull/35
[2]
https://github.com/dbpedia/extraction-framework/blob/master/scripts/src/main/scala/org/dbpedia/extraction/scripts/CanonicalizeUris.scala
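As a rough illustration of what such a merge does: the idea is to rewrite URIs in a dump using sameAs links between Wikidata items and DBpedia resources. This is a toy Python sketch, not the actual script (which is Scala and streams N-Triples dumps); the mapping and URIs below are made up for the example.

```python
# Toy sketch of URI canonicalization: given a sameAs map from Wikidata-style
# item URIs to DBpedia resource URIs, rewrite subjects/objects in triples.
# URIs are illustrative, not taken from a real dump.
same_as = {
    "http://wikidata.dbpedia.org/resource/Q42":
        "http://dbpedia.org/resource/Douglas_Adams",
}

def canonicalize(triples, mapping):
    """Replace any URI that has a canonical equivalent in `mapping`."""
    for s, p, o in triples:
        yield mapping.get(s, s), p, mapping.get(o, o)

triples = [
    ("http://wikidata.dbpedia.org/resource/Q42",
     "http://xmlns.com/foaf/0.1/name",
     "Douglas Adams"),
]
print(list(canonicalize(triples, same_as)))
```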
> thanks
> regards
>
>
> On Mon, Apr 22, 2013 at 3:18 PM, Dimitris Kontokostas
> <[email protected]>wrote:
>
>> Welcome Hady,
>>
>> Hady indeed volunteered to work on this some months ago, and we gave him
>> some warm-up tasks. Then university exams & projects took all his time,
>> and this was left half-started :)
>> The Wikidata mappings idea is a top priority for us and is the main
>> reason we are running GSoC this year. It is good that we have at least
>> one person interested in it.
>>
>> The idea page was updated today; please read it carefully and ask about
>> anything you don't completely understand.
>>
>> As we already said, your application must convince us that you know what
>> you are applying for, and this can be achieved either by asking more
>> questions or by working on warm-up tasks ;)
>>
>> Cheers,
>> Dimitris
>>
>>
>> On Sat, Apr 20, 2013 at 12:01 PM, Hady elsahar <[email protected]> wrote:
>>
>>> Hello all,
>>>
>>> I'm Hady, a master's student in informatics and a research assistant at
>>> Nile University, where we are currently working on sentiment analysis
>>> of microblogs.
>>> I've worked on more than one Semantic Web project; the most significant
>>> one is Weetit.com
>>> <http://hadyelsahar.wordpress.com/2012/07/13/weet-it-alpha-version-demo-6/>[1][2],
>>> a semantic question answering engine based on data from DBpedia and
>>> Freebase.
>>>
>>> The second project was a Foursquare-to-DBpedia mapper
>>> <https://www.youtube.com/watch?v=gytHKDerGJ0> [3],
>>> which maps data extracted from Foursquare into RDF using the DBpedia
>>> ontology.
>>>
>>> During those projects I dealt a lot with DBpedia dumps, extraction, and
>>> installation on Virtuoso servers (the whole process, including the bugs
>>> we faced). I'm also very familiar with the DBpedia ontology and its
>>> most common types, as well as Semantic Web standards like RDF and
>>> SPARQL.
>>>
>>> After finishing my bachelor's degree, I was very interested in
>>> contributing to DBpedia, so Sebastian and Dimitris guided me to submit
>>> a proposal to GSoC. Checking the ideas page, I found that I'm very
>>> familiar with the objectives of some tasks, like "*Mapping service from
>>> Wikidata properties to DBpedia ontology*" & "*Design a better /
>>> interactive display page*".
>>>
>>> I did something similar to the interactive display page idea in our
>>> project "weet-it", in which we identified the most common datatypes and
>>> customized the answers according to those datatypes. This was inspired
>>> by the displays of Freebase and Wolfram Alpha, so we decided to build a
>>> similarly customized interface for places, people, or sports books, as
>>> well as to rank the properties to display for each datatype.
>>>
>>> However, I consider myself more interested in the first idea, mapping
>>> Wikidata to DBpedia. I've consulted Dimitris about it a lot before, and
>>> he has guided me regarding the objectives and the challenge of change
>>> propagation from Wikidata to DBpedia. I've taken a look at the kind of
>>> data offered by Wikidata and how to parse it and convert it to DBpedia,
>>> and I've started writing some experimental code to get familiar with
>>> the extraction task from Wikidata; I'm now at the stage of parsing the
>>> JSON.
>>>
>>> I consider myself to be at a good level in Java and Python, and I've
>>> started learning Scala in order to contribute to open source projects
>>> like DBpedia and DBpedia Spotlight. I'll cover other things in detail
>>> in the proposal when it's finished.
>>>
>>> If there are any recommendations regarding the proposal, or anything
>>> else potentially worth mentioning in it, I would much appreciate
>>> hearing them.
>>>
>>> thanks
>>> regards
>>>
>>>
>>> Code Repos :
>>> 1- https://github.com/hadyelsahar/foursquare2RDF
>>> 2- https://github.com/sherifkandeel/weet-it_WCF
>>> 3- https://github.com/hadyelsahar/Weet-itWebsite
>>>
>>>
>>>
>>> -------------------------------------------------
>>> Hady El-Sahar
>>> Research Assistant
>>> Center of Informatics Sciences | Nile
>>> University<http://nileuniversity.edu.eg/>
>>>
>>> email : [email protected]
>>> Phone : +2-01220887311
>>> http://hadyelsahar.me/
>>>
>>> <http://www.linkedin.com/in/hadyelsahar>
>>>
>>>
>>>
>>> _______________________________________________
>>> Dbpedia-gsoc mailing list
>>> [email protected]
>>> https://lists.sourceforge.net/lists/listinfo/dbpedia-gsoc
>>>
>>>
>>
>>
>> --
>> Kontokostas Dimitris
>>
>
>
>
> --
> -------------------------------------------------
> Hady El-Sahar
> Research Assistant
> Center of Informatics Sciences | Nile
> University<http://nileuniversity.edu.eg/>
>
> email : [email protected]
> Phone : +2-01220887311
> http://hadyelsahar.me/
>
> <http://www.linkedin.com/in/hadyelsahar>
>
>
--
Kontokostas Dimitris