Thank you again Lorenz, I have (partially) got your point, and the explanation here is quite useful.
All my (required) data is from DBpedia. My goal is to get this data in a Semantic Web application (using Jena) via SPARQL queries. One way is to use the DBpedia endpoint directly (which cannot be re-used and shared); I want to use the other way, in which I access the DBpedia data through my ontology. I read an article on Stack Overflow which says: "include the link of the required DBpedia resource as an OWL NamedIndividual, like http://dbpedia.org/resource/name, and query it inside the Semantic Web application (via SPARQL) just as you query the local data of the ontology". But you mentioned earlier that one needs CONSTRUCT queries in this situation. I am not sure how and why CONSTRUCT can be used in that case. My instructor recommended that I re-use DBpedia knowledge via ontologies.

Regards

On Sat, Nov 11, 2017 at 3:39 PM, Lorenz Buehmann <[email protected]> wrote:

>
> On 11.11.2017 12:55, Sidra shah wrote:
> > Hello Lorenz, and thank you for your information.
> >
> > * All the data of this resource is still located in the DBpedia dataset
> >
> > If that is the case, then why do we provide links to DBpedia resources
> > inside the Protege editor? All I want is to re-use the data/information
> > of DBpedia.
> >
> > It is then better that we use rdfs:seeAlso and provide the DBpedia
> > resource, like www.myOntology.org/Oxford, and then use
> > rdfs:seeAlso http://dbpedia.org/resource/Oxford
> I guess you're mixing up things here. Indeed it's fine to reuse resources
> from the Web of Data. I mean, that's in general how Linked Data is
> supposed to work.
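To make the CONSTRUCT question above concrete, here is a minimal sketch (dbr:Oxford, as in the earlier mails, is just a stand-in for whatever resource you need): a CONSTRUCT query sent to the public DBpedia endpoint returns the matching triples as an RDF graph, which you can then save into your local ontology and query with Jena like any local data.

```sparql
# Sketch: run against the public DBpedia endpoint (https://dbpedia.org/sparql).
# The result is an RDF graph containing all outgoing triples of the resource,
# which can be merged into the local ontology. dbr:Oxford is illustrative.
PREFIX dbr: <http://dbpedia.org/resource/>

CONSTRUCT { dbr:Oxford ?p ?o }
WHERE     { dbr:Oxford ?p ?o }
```

This is the "why" of CONSTRUCT here: unlike SELECT, which returns a result table, CONSTRUCT returns triples, so its output can be loaded directly into a local model.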
> Ok, sometimes it's also recommended to define your own resources and
> relate those to external datasets via owl:sameAs, but the result is more
> or less the same: the data is located in different places.
>
> But, and that's what you have to understand: whatever you're doing with
> your local data (querying, inferencing, etc.), the tool/framework/API
> you're using for that has to be able to retrieve the data from different
> locations if the data is physically located at different locations.
> Seems quite obvious, or not?
>
> And now it's up to you: given that you're reusing DBpedia resources in
> your ontology:
> 1) how does the SPARQL query engine working on your local ontology know
> where the data comes from? Again, it should be quite obvious that it's
> up to you to do all the setup, which brings us to the concept of
> federated query processing...
>
> > Regards
> >
> > On Sat, Nov 11, 2017 at 1:25 PM, Lorenz Buehmann <
> > [email protected]> wrote:
> >
> >>> The second point is if we enter an individual in Protege (Create New
> >>> OWL Individual) and then enter a URI like "http://dbpedia.org/resource".
> >> I understand. But what do you expect to happen with this step? All
> >> that you did is to create an OWL individual with the URI of the DBpedia
> >> resource.
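A sketch of what such a federated setup could look like, assuming a hypothetical local resource linked to DBpedia via owl:sameAs and the public endpoint at https://dbpedia.org/sparql: the SERVICE clause is how the local query engine is told where the remote part of the data lives.

```sparql
# Sketch of a federated query over a local ontology plus DBpedia.
# The local owl:sameAs links are an assumption for illustration;
# the SERVICE pattern is evaluated remotely at the DBpedia endpoint.
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?city ?comment
WHERE {
  # local data: our own resource, linked to its DBpedia counterpart
  ?city owl:sameAs ?dbpediaCity .
  # remote data: fetched live from DBpedia
  SERVICE <https://dbpedia.org/sparql> {
    ?dbpediaCity rdfs:comment ?comment .
    FILTER (lang(?comment) = "en")
  }
}
```

With Jena/ARQ such a query can be run over the local model as-is; the engine dispatches the SERVICE block to the remote endpoint.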
> >> All the data of this resource is still located in the DBpedia dataset,
> >> which is available via
> >> a) RDF dumps, or
> >> b) the public DBpedia SPARQL endpoint, or
> >> c) an HTTP GET request according to the Linked Data principle.
> >>
> >> But the data is **not** in your local ontology, and neither Protege nor
> >> the built-in SPARQL plugin would have access to it.
> >>> By better I mean better in general (performance, re-use, etc.). Will
> >>> it be considered a "DBpedia resource" if we just include its URI in
> >>> the Protege editor and then query it locally like we query traditional
> >>> data in Protege (ontology)?
> >> How do you query it locally? Which API, which triple store, etc.?
> >>
> >> In general, what is the use case?
> >>>
> >>> On Sat, Nov 11, 2017 at 1:05 PM, Lorenz Buehmann <
> >>> [email protected]> wrote:
> >>>
> >>>> 1. Define "better".
> >>>>
> >>>> 2. I don't understand what you mean by the second point... what is an
> >>>> "IRI editor"??? And then, how would that extract "some triples"?
> >>>>
> >>>> As I don't know what you're asking about, and to keep it short: the
> >>>> common way to extract RDF triples from an RDF dataset is to use a
> >>>> SPARQL CONSTRUCT query that matches those "some triples".
> >>>>
> >>>> On 10.11.2017 17:11, Sidra shah wrote:
> >>>>> Hello
> >>>>>
> >>>>> For instance, if we have to get some triples from DBpedia, which is
> >>>>> the better way to get them?
> >>>>>
> >>>>> (1) Directly use the DBpedia endpoint inside the application?
> >>>>>
> >>>>> (2) Use an ontology and an IRI editor like
> >>>>> dbpedia.org/resource/SOMETHING?
> >>>>>
> >>>>> Thank you
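As an illustration of "a SPARQL CONSTRUCT query that matches those 'some triples'", this sketch extracts only a couple of properties rather than copying everything (rdfs:label and dbo:abstract are illustrative choices, as is dbr:Oxford):

```sparql
# Sketch: a targeted CONSTRUCT that copies only the triples we care about
# from DBpedia, so the extracted graph stays small.
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT {
  dbr:Oxford rdfs:label   ?label ;
             dbo:abstract ?abstract .
}
WHERE {
  dbr:Oxford rdfs:label   ?label ;
             dbo:abstract ?abstract .
  FILTER (lang(?label) = "en" && lang(?abstract) = "en")
}
```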
