I hope so Lorenz, thank you for your time.

Regards


On Mon, Nov 13, 2017 at 11:22 AM, Lorenz Buehmann <
[email protected]> wrote:

> It should be rather "simple" as long as you know which data you need in
> your application. Good luck.
>
>
> On 12.11.2017 18:17, Sidra shah wrote:
> > thank you Lorenz for your time and guidance. I am working on it and will
> > get back to you, if required.
> >
> > Regards
> >
> > On Sun, Nov 12, 2017 at 4:41 PM, Lorenz Buehmann <
> > [email protected]> wrote:
> >
> >>
> >> On 12.11.2017 11:16, Sidra shah wrote:
> >>> Hello Lorenz. thank you again.
> >>>
> >>> Yes I want to " load the DBpedia data into my ontology".
> >>>
> >>> Actually my ontology data and the data I need from DBpedia are so
> >>> related that I want the DBpedia data to be imported/stored in my
> >>> ontology.
> >>>
> >>> According to my instructor, use any way, but the "extraction of
> >>> DBpedia data" should be via the ontology. That means: do not use the
> >>> DBpedia endpoint inside your application.
> >> As I said:
> >> 1) use SPARQL CONSTRUCT to get the necessary data for your application
> >> from the DBpedia endpoint
> >> 2) add this data to your ontology
> >> 3) then do whatever you're doing in your application based on that
> >> local data.
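For step 1, a minimal sketch of such a CONSTRUCT query might look like the following (dbr:Oxford and the property list are only placeholder examples; the query would be run against the public endpoint at https://dbpedia.org/sparql):

```sparql
# Sketch: extract a few selected triples about one resource from DBpedia.
# The resource and the filtered properties are placeholders - replace them
# with whatever your application actually needs.
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT {
  dbr:Oxford ?p ?o .
}
WHERE {
  dbr:Oxford ?p ?o .
  FILTER(?p IN (dbo:country, dbo:populationTotal, rdfs:label))
}
```

The endpoint can return the result as Turtle or RDF/XML, which you could then load into your ontology file (step 2), e.g. by importing it in Protege or reading it into a Jena Model.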
> >>> Regards
> >>>
> >>> On Sat, Nov 11, 2017 at 7:17 PM, Lorenz Buehmann <
> >>> [email protected]> wrote:
> >>>
> >>>> On 11.11.2017 14:58, Sidra shah wrote:
> >>>>> Thank you again Lorenz,
> >>>>>
> >>>>> I have (partially) got your point and quite useful explanation here.
> >>>>>
> >>>>> All my (required) data is from DBpedia.
> >>>>>
> >>>>> My point is to get this data using a Semantic Web application
> >>>>> (using Jena) via SPARQL query.
> >>>> I still don't understand what your "application" does... accessing the
> >>>> data means either loading the DBpedia data into your ontology in
> >>>> advance or using federated SPARQL queries at application runtime.
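The second option, a federated query, would be along these lines (the local triple pattern and the resource are just placeholders; the local part runs on your own dataset while the SERVICE block is evaluated by the DBpedia endpoint):

```sparql
# Hypothetical federated query: mixes local data with data fetched
# from DBpedia at query time via the SERVICE keyword.
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?s ?label
WHERE {
  ?s rdfs:seeAlso dbr:Oxford .           # triple from the local ontology
  SERVICE <https://dbpedia.org/sparql> {
    dbr:Oxford rdfs:label ?label .       # fetched remotely at runtime
    FILTER(lang(?label) = "en")
  }
}
```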
> >>>>> One way is to directly use the DBpedia endpoint (which can not be
> >>>>> re-used and shared)
> >>>> Why not? The whole DBpedia dataset is open data and can be downloaded
> >>>> and used by everyone locally.
> >>>>> I want to use the other way, in which I access DBpedia data using my
> >>>>> ontology. I have read an article on Stack Overflow which says:
> >>>>> "include the link of the required DBpedia resource in the OWL
> >>>>> NamedIndividual tab, like http://dbpedia.org/resource/name, and
> >>>>> query it inside the Semantic Web application (via SPARQL) as you
> >>>>> query the local data of the ontology".
> >>>> If your ontology just contains DBpedia resources, then it's nothing
> >>>> more than a subset of the DBpedia dataset. What prevents you from
> >>>> retrieving the data you really need in your application and then
> >>>> using that data in your Web application locally?
> >>>>
> >>>> In general, reusing an IRI as an OWL individual in your local
> >>>> ontology is similar to just "talking" about the same individual in
> >>>> your ontology. That doesn't necessarily mean that you have access to
> >>>> data about this individual provided by others, like the people who
> >>>> extracted the DBpedia resource from Wikipedia infoboxes.
> >>>>> But you earlier mentioned that one needs CONSTRUCT queries in this
> >>>>> situation. I am not sure how and why CONSTRUCT can be used in that
> >>>>> case.
> >>>> In your initial question you said that you want to extract "some
> >>>> triples". You're the only person who knows which triples; thus,
> >>>> you're the one who should be able to query for this data. And I
> >>>> mentioned SPARQL CONSTRUCT because this type of query returns a set
> >>>> of RDF triples, compared to SPARQL SELECT queries, which return a
> >>>> result set.
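To illustrate that difference with the same (placeholder) pattern: a SELECT gives you a table of variable bindings, while a CONSTRUCT gives you an RDF graph you can merge into your local model:

```sparql
# SELECT: returns a result set, i.e. a table of (?p, ?o) bindings
SELECT ?p ?o
WHERE     { <http://dbpedia.org/resource/Oxford> ?p ?o }

# CONSTRUCT: returns RDF triples built from the same matches
CONSTRUCT { <http://dbpedia.org/resource/Oxford> ?p ?o }
WHERE     { <http://dbpedia.org/resource/Oxford> ?p ?o }
```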
> >>>>> My instructor recommended that I use DBpedia knowledge via
> >>>>> ontologies.
> >>>>>
> >>>>> Regards
> >>>>>
> >>>>>
> >>>>> On Sat, Nov 11, 2017 at 3:39 PM, Lorenz Buehmann <
> >>>>> [email protected]> wrote:
> >>>>>
> >>>>>> On 11.11.2017 12:55, Sidra shah wrote:
> >>>>>>> Hello Lorenz and thank you for your information.
> >>>>>>>
> >>>>>>> * All the data of this resource is still located in the DBpedia
> >>>>>>> dataset
> >>>>>>> If that is the case, then why do we provide links to the DBpedia
> >>>>>>> resource inside the Protege editor? All I want is to re-use the
> >>>>>>> data/information of DBpedia.
> >>>>>>>
> >>>>>>> It's then better that we use rdfs:seeAlso and provide the DBpedia
> >>>>>>> resource, like www.myOntology.org/Oxford and then use
> >>>>>>> rdfs:seeAlso http://dbpedia.org/resource/Oxford
> >>>>>> I guess you're mixing up things here. Indeed, it's fine to reuse
> >>>>>> resources from the Web of Data. I mean, that's in general how
> >>>>>> Linked Data is supposed to work. OK, sometimes it's also
> >>>>>> recommended to define your own resources and relate those to
> >>>>>> external datasets via owl:sameAs, but the result is more or less
> >>>>>> the same. The data is located at different places.
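As a toy example of that owl:sameAs linking, reusing the www.myOntology.org/Oxford IRI mentioned earlier in the thread (the local IRI is made up for illustration):

```sparql
# Hypothetical update on the local dataset: declare that our own
# resource denotes the same individual as the DBpedia one.
PREFIX owl: <http://www.w3.org/2002/07/owl#>

INSERT DATA {
  <http://www.myOntology.org/Oxford>
      owl:sameAs <http://dbpedia.org/resource/Oxford> .
}
```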
> >>>>>> But, and that's what you have to understand: whatever you're doing
> >>>>>> with your local data - querying, inferencing, etc. - the
> >>>>>> tool/framework/API you're using for that has to be able to retrieve
> >>>>>> the data from different locations if the data is physically located
> >>>>>> at different locations. Seems quite obvious, or not?
> >>>>>>
> >>>>>> And now it's up to you: given that you're reusing DBpedia resources
> >>>>>> in your ontology:
> >>>>>> 1) how does the SPARQL query engine working on your local ontology
> >>>>>> know where the data comes from? Again, it should be quite obvious
> >>>>>> that it's up to you to do all the setup, which brings us to the
> >>>>>> concept of federated query processing...
> >>>>>>> Regards
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On Sat, Nov 11, 2017 at 1:25 PM, Lorenz Buehmann <
> >>>>>>> [email protected]> wrote:
> >>>>>>>
> >>>>>>>>> The second point is if we enter an individual in Protege (Create
> >>>>>>>>> New OWL Individual) and then enter a URI like
> >>>>>>>>> "http://dbpedia.org/resource".
> >>>>>>>> I understand. But what do you expect to happen with this step? All
> >>>>>>>> that you did is create an OWL individual with the URI of the
> >>>>>>>> DBpedia resource. All the data of this resource is still located
> >>>>>>>> in the DBpedia dataset, which is available via
> >>>>>>>> a) RDF dumps,
> >>>>>>>> b) the public DBpedia SPARQL endpoint, or
> >>>>>>>> c) HTTP GET requests according to the Linked Data principle.
> >>>>>>>>
> >>>>>>>> But the data is **not** in your local ontology, and neither
> >>>>>>>> Protege nor the built-in SPARQL plugin would have access to it.
> >>>>>>>>> By better I mean better in general (performance, re-use, etc.).
> >>>>>>>>> Will it be considered a "DBpedia resource" if we just include
> >>>>>>>>> its URI in the Protege editor and then query it locally, like we
> >>>>>>>>> query traditional data in Protege (the ontology)?
> >>>>>>>> How do you query it locally? Which API, which triple store, etc?
> >>>>>>>>
> >>>>>>>> In general, what is the use-case?
> >>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Sat, Nov 11, 2017 at 1:05 PM, Lorenz Buehmann <
> >>>>>>>>> [email protected]> wrote:
> >>>>>>>>>
> >>>>>>>>>> 1. Define "better"
> >>>>>>>>>>
> >>>>>>>>>> 2. I don't understand what you mean by the second point... what
> >>>>>>>>>> is an "IRI editor"??? And then, how would that extract "some
> >>>>>>>>>> triples"?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> As I don't know what you're asking about, and to keep it short:
> >>>>>>>>>> the common way to extract RDF triples from an RDF dataset is to
> >>>>>>>>>> use a SPARQL CONSTRUCT query that matches those "some triples".
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On 10.11.2017 17:11, Sidra shah wrote:
> >>>>>>>>>>> Hello
> >>>>>>>>>>>
> >>>>>>>>>>> For instance, if we have to get some triples from DBpedia,
> >>>>>>>>>>> which one is the better way to get them?
> >>>>>>>>>>>
> >>>>>>>>>>> (1) Directly use the DBpedia endpoint inside the application?
> >>>>>>>>>>>
> >>>>>>>>>>> (2) Use an ontology and use an IRI editor like
> >>>>>>>>>>> dbpedia.org/resource/SOMETHING?
> >>>>>>>>>>>
> >>>>>>>>>>> Thank you
> >>>>>>>>>>>
> >>
>
>
