[Dbpedia-discussion] 2nd Call for Papers: Knowledge Discovery and Data Mining Meets Linked Open Data (Know@LOD 2015)

2015-03-02 Thread Jens Lehmann
-------------------------------------------------------------------------
Fourth International Workshop on
Knowledge Discovery and Data Mining Meets Linked Open Data
(Know@LOD 2015)

Co-located with the 12th Extended Semantic Web Conference (ESWC 2015)
May 31 - June 4, Portoroz, Slovenia

http://knowalod2015.informatik.uni-mannheim.de
-------------------------------------------------------------------------

The fourth international workshop on Knowledge Discovery and Data Mining 
Meets Linked Open Data (Know@LOD) will be held at the 12th Extended 
Semantic Web Conference (ESWC).

Knowledge discovery and data mining (KDD) is a well-established field 
with a large community investigating methods for the discovery of 
patterns and regularities in large data sets, including relational 
databases and unstructured text. Research in this field has led to the 
development of practically relevant and scalable approaches such as 
association rule mining, subgroup discovery, graph mining, and 
clustering. At the same time, the Web of Data has grown to one of the 
largest publicly available collections of structured, cross-domain data 
sets. While the growing success of Linked Data and its use in 
applications, e.g., in the e-Government area, has provided numerous 
novel opportunities, its scale and heterogeneity is posing challenges to 
the field of knowledge discovery and data mining.

Contributions from the knowledge discovery field may help foster the 
future growth of Linked Open Data. Some recent works on statistical 
schema induction, mapping, and link mining have already shown that there 
is a fruitful intersection of both fields. With the proposed workshop, 
we want to investigate possible synergies between both the Linked Data 
community and the field of Knowledge Discovery, and to explore novel 
directions for mutual research. We wish to stimulate a discussion about 
how state-of-the-art algorithms for knowledge discovery and data mining 
could be adapted to fit the characteristics of Linked Data, such as its 
distributed nature and incompleteness (i.e., the absence of negative 
examples), and to identify concrete use cases and applications.

Submissions have to be formatted according to the Springer LNCS 
guidelines. We welcome full papers (max. 12 pages) as well as 
work-in-progress and position papers (max. 6 pages). Accepted papers will 
be published online via CEUR-WS, with a selection of the best papers of 
each ESWC workshop appearing in an additional volume edited by Springer. 
Papers must be submitted online via Easychair at 
https://easychair.org/conferences/?conf=knowlod2015.

Topics of interest include data mining and knowledge discovery methods 
for generating, processing, or using Linked Data, such as:
- Automatic link discovery
- Event detection and pattern discovery
- Frequent pattern analysis
- Graph mining
- Knowledge base debugging, cleaning and repair
- Large-scale information extraction
- Learning and refinement of ontologies
- Modeling provenance information
- Ontology matching and object reconciliation
- Scalable machine learning
- Statistical relational learning

Important Dates:

Submission deadline: March 16th, 2015
Notification: April 3rd, 2015
Camera ready version: April 17th, 2015
Workshop: May 31st or June 1st, 2015

Organization:

Jens Lehmann, University of Leipzig, Germany
Heiko Paulheim, University of Mannheim, Germany
Vojtěch Svátek, University of Economics, Prague, Czech Republic
Johanna Völker, University of Mannheim, Germany


-- 
Dr. Jens Lehmann
Head of AKSW group, University of Leipzig
Homepage: http://www.jens-lehmann.org
Group: http://aksw.org - semantic web research center
Project: http://geoknow.eu - geospatial data on the web



[Dbpedia-discussion] DL-Learner 1.0 (Supervised Structured Machine Learning Framework) Released

2015-02-13 Thread Jens Lehmann

Dear all,

the AKSW group [1] is happy to announce DL-Learner 1.0.

DL-Learner is a framework containing algorithms for supervised machine 
learning in RDF and OWL. DL-Learner can use various RDF and OWL 
serialization formats as well as SPARQL endpoints as input, can connect 
to most popular OWL reasoners and is easily and flexibly configurable. 
It extends concepts of Inductive Logic Programming and Relational 
Learning to the Semantic Web in order to allow powerful data analysis.
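
As a rough illustration of the kind of input involved when a SPARQL
endpoint is used as knowledge source, the following minimal Python sketch
collects candidate instance URIs from the public DBpedia endpoint. It is
not DL-Learner code; the class URI and the result handling are merely
illustrative assumptions.

# Illustrative sketch only (not part of DL-Learner): gather instance URIs
# from a SPARQL endpoint, e.g. to use them later as positive examples in a
# supervised class-learning experiment. Uses only the Python standard library.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://dbpedia.org/sparql"  # example endpoint
QUERY = "SELECT ?s WHERE { ?s a <http://dbpedia.org/ontology/Scientist> } LIMIT 20"

url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
request = urllib.request.Request(
    url, headers={"Accept": "application/sparql-results+json"})
with urllib.request.urlopen(request) as response:
    bindings = json.load(response)["results"]["bindings"]

positive_examples = [row["s"]["value"] for row in bindings]
for uri in positive_examples:
    print(uri)

In an actual DL-Learner run, such URIs are handed to the framework through
its configuration; the examples shipped with the release show the exact
syntax.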

Website: http://dl-learner.org
GitHub page: https://github.com/AKSW/DL-Learner
Download: https://github.com/AKSW/DL-Learner/releases
ChangeLog: http://dl-learner.org/development/changelog/

DL-Learner is used for data analysis tasks within other tools such as 
ORE [2] and RDFUnit [3]. Technically, it uses refinement operator based, 
pattern based and evolutionary techniques for learning on structured 
data. For a practical example, see [4]. DL-Learner also offers a plugin 
for Protégé [5], which can give suggestions for axioms to add. 
DL-Learner is part of the Linked Data Stack [6] - a repository for 
Linked Data management tools.

We want to thank everyone who helped to create this release, in 
particular (alphabetically) An Tran, Chris Shellenbarger, Christoph 
Haase, Daniel Fleischhacker, Didier Cherix, Johanna Völker, Konrad 
Höffner, Robert Höhndorf, Sebastian Hellmann and Simon Bin. We also 
acknowledge support by the recently started SAKE project, in which 
DL-Learner will be applied to event analysis in manufacturing use cases, 
as well as the GeoKnow [7] and Big Data Europe [8] projects where it is 
part of the respective platforms.

View this announcement on Twitter and the AKSW blog:
   https://twitter.com/dllearner/status/566172443442958336
   http://blog.aksw.org/2015/dl-learner-1-0/

Kind regards,

Lorenz Bühmann, Jens Lehmann and Patrick Westphal

[1] http://aksw.org
[2] http://ore-tool.net
[3] http://aksw.org/Projects/RDFUnit.html
[4] http://dl-learner.org/community/carcinogenesis/
[5] https://github.com/AKSW/DL-Learner-Protege-Plugin
[6] http://stack.linkeddata.org
[7] http://geoknow.eu
[8] http://www.big-data-europe.eu

-- 
Dr. Jens Lehmann
Head of AKSW group, University of Leipzig
Homepage: http://www.jens-lehmann.org
Group: http://aksw.org - semantic web research center
Project: http://geoknow.eu - geospatial data on the web



[Dbpedia-discussion] CfP: Workshop on Geospatial Linked Data (GeoLD 2014) - Deadline: July 10th

2014-06-21 Thread Jens Lehmann
=========================================================================
Call for Papers
1st International Workshop on Geospatial Linked Data (GeoLD 2014)
  in conjunction with the annual SEMANTiCS conference

1st September 2014, Leipzig, Germany

http://geold.geoknow.eu
=========================================================================
OBJECTIVES
----------

'Geospatial technology, information, and services are addressing some of
the major priorities of our nations, adding value to productivity,
reducing costs and enabling GDP growth in the process.'
   Prof. Arup Dasgupta, in Geospatial World, May 2013

In recent years, Semantic Web technologies have strengthened their
position in the areas of data and knowledge management. Standards for
organizing and querying semantic information, such as RDF(S) and SPARQL,
are adopted by large academic communities, while corporate vendors adopt
semantic technologies to organize, expose, exchange and retrieve their
datasets as Linked Data. Moreover, a large number of currently available
datasets (both RDF and conventional) contain geospatial information,
which is of high importance in several application scenarios, e.g.,
navigation, tourism, or social media. Examples include DBpedia,
Geonames, OSM and its RDF counterpart, LinkedGeoData. RDF stores have
become robust and scalable enough to support volumes of billions of
records (RDF triples), but traditional geospatial data management systems
still significantly outperform them in efficiency and scalability. On
the other hand, GIS systems can benefit from Linked Data principles
(e.g. schema agility, interoperability). Recently, GeoSPARQL has emerged
as a promising standard from OGC for geospatial RDF that targets
standardized geospatial RDF data modeling and querying. A great number
of tools and libraries have been developed that allow for handling
(storing, querying, visualizing, etc.) Linked Data; however, only a few
approaches have started to focus on geospatial RDF data management.
Integrating the Semantic Web with geospatial data management requires the
scientific community to address the following two challenges. First, the
definition of proper standards, vocabularies and methodologies for
representing, transforming and mapping geospatial information according
to RDF(S) and SPARQL protocols that also conform to the principles of
established geospatial standards. Second, the development of
technologies for efficient storage, robust indexing, processing,
reasoning, querying and visualization of semantically organized
geospatial data.

TOPICS OF INTEREST
------------------

The 1st International Workshop on Geospatial Linked Data welcomes the
submission of original and previously unpublished research papers in the
field of geospatial Linked Data management. Papers may deal with
methods, models, algorithms, case studies, practical experiences and
applications, as well as work-in-progress solutions. Topics of interest
include, but are not limited to:

Interoperability and Integration
--------------------------------
* Geospatial Linked Data and standards (GeoSPARQL, INSPIRE, W3C, OGC)
* Extraction/transformation of geospatial Linked Data from conventional
sources
* Integration (schema mapping, interlinking, fusion) techniques for
geospatial RDF data
* Enrichment of Linked Data with geospatial information
* Quality, provenance and evolution of geospatial Linked Data

Big Data Management
-------------------
* Distributed solutions for geospatial Linked Data management (storing,
querying, mapping, etc.)
* Algorithms and tools for large scale, scalable geospatial Linked Data
management
* Efficient indexing and querying of geospatial Linked Data
* Geospatial-specific reasoning on RDF data
* Ranking techniques on querying geospatial RDF data
* Advanced querying capabilities on geospatial RDF data

Utilization of Geospatial Linked Data
-------------------------------------
* Geospatial Linked Data in social web platforms and applications
* Visualization models and interfaces for browsing, authoring and
querying geospatial Linked Data
* Real world applications/use cases/paradigms using (exposing,
utilizing) geospatial Linked Data
* Evaluation/comparison of tools/libraries/frameworks for geospatial
Linked Data management

GeoLD will provide the opportunity for the Linked Data community to
focus on the emerging need for effective and efficient production,
management and utilization of geospatial information within Linked Data.
Emphasis will be given to works describing novel methodologies,
algorithms and tools that advance the current state of the art with
respect to efficiency or effectiveness. We welcome both mature
solutions and ongoing work that already shows promising results.

SUBMISSION AND PUBLICATION
--------------------------

To provide a forum for sharing novel ideas, GeoLD welcomes a broad
spectrum of contributions, including full research papers, 

[Dbpedia-discussion] CfP: Workshop on Geospatial Linked Data (GeoLD 2014)

2014-05-15 Thread Jens Lehmann
=========================================================================
Call for Papers
1st International Workshop on Geospatial Linked Data (GeoLD 2014)
   in conjunction with the annual SEMANTiCS conference

1st September 2014, Leipzig, Germany

http://geold.geoknow.eu
=========================================================================
OBJECTIVES
----------

'Geospatial technology, information, and services are addressing some of 
the major priorities of our nations, adding value to productivity, 
reducing costs and enabling GDP growth in the process.'
Prof. Arup Dasgupta, in Geospatial World, May 2013

In recent years, Semantic Web technologies have strengthened their 
position in the areas of data and knowledge management. Standards for 
organizing and querying semantic information, such as RDF(S) and SPARQL, 
are adopted by large academic communities, while corporate vendors adopt 
semantic technologies to organize, expose, exchange and retrieve their 
datasets as Linked Data. Moreover, a large number of currently available 
datasets (both RDF and conventional) contain geospatial information, 
which is of high importance in several application scenarios, e.g., 
navigation, tourism, or social media. Examples include DBpedia, 
Geonames, OSM and its RDF counterpart, LinkedGeoData. RDF stores have 
become robust and scalable enough to support volumes of billions of 
records (RDF triples), but traditional geospatial data management systems 
still significantly outperform them in efficiency and scalability. On 
the other hand, GIS systems can benefit from Linked Data principles 
(e.g. schema agility, interoperability). Recently, GeoSPARQL has emerged 
as a promising standard from OGC for geospatial RDF that targets 
standardized geospatial RDF data modeling and querying. A great number 
of tools and libraries have been developed that allow for handling 
(storing, querying, visualizing, etc.) Linked Data; however, only a few 
approaches have started to focus on geospatial RDF data management. 
Integrating the Semantic Web with geospatial data management requires the 
scientific community to address the following two challenges. First, the 
definition of proper standards, vocabularies and methodologies for 
representing, transforming and mapping geospatial information according 
to RDF(S) and SPARQL protocols that also conform to the principles of 
established geospatial standards. Second, the development of 
technologies for efficient storage, robust indexing, processing, 
reasoning, querying and visualization of semantically organized 
geospatial data.

TOPICS OF INTEREST
------------------

The 1st International Workshop on Geospatial Linked Data welcomes the 
submission of original and previously unpublished research papers in the 
field of geospatial Linked Data management. Papers may deal with 
methods, models, algorithms, case studies, practical experiences and 
applications, as well as work-in-progress solutions. Topics of interest 
include, but are not limited to:

Interoperability and Integration
--------------------------------
* Geospatial Linked Data and standards (GeoSPARQL, INSPIRE, W3C, OGC)
* Extraction/transformation of geospatial Linked Data from conventional 
sources
* Integration (schema mapping, interlinking, fusion) techniques for 
geospatial RDF data
* Enrichment of Linked Data with geospatial information
* Quality, provenance and evolution of geospatial Linked Data

Big Data Management
-------------------
* Distributed solutions for geospatial Linked Data management (storing, 
querying, mapping, etc.)
* Algorithms and tools for large scale, scalable geospatial Linked Data 
management
* Efficient indexing and querying of geospatial Linked Data
* Geospatial-specific reasoning on RDF data
* Ranking techniques on querying geospatial RDF data
* Advanced querying capabilities on geospatial RDF data

Utilization of Geospatial Linked Data
-------------------------------------
* Geospatial Linked Data in social web platforms and applications
* Visualization models and interfaces for browsing, authoring and 
querying geospatial Linked Data
* Real world applications/use cases/paradigms using (exposing, 
utilizing) geospatial Linked Data
* Evaluation/comparison of tools/libraries/frameworks for geospatial 
Linked Data management

GeoLD will provide the opportunity for the Linked Data community to 
focus on the emerging need for effective and efficient production, 
management and utilization of geospatial information within Linked Data. 
Emphasis will be given to works describing novel methodologies, 
algorithms and tools that advance the current state of the art with 
respect to efficiency or effectiveness. We welcome both mature 
solutions and ongoing work that already shows promising results.

SUBMISSION AND PUBLICATION
--------------------------

To provide a forum for sharing novel ideas, GeoLD welcomes a broad 
spectrum 

[Dbpedia-discussion] SEMANTICS 2014 - 2nd Call for Papers + Poster + Demos

2014-05-14 Thread Jens Lehmann
 with other researchers. Poster and demo submissions should
consist of a paper of 1-4 pages that describes the work and its
contribution to the field. Submissions to the Posters & Demonstrations track
must be formatted in the Springer LNCS format
(http://www.springer.com/computer/lncs/lncs+authors), i.e. please do NOT
use the ACM template here.

Submissions will be reviewed by experienced researchers and
practitioners; each submission will receive detailed feedback.

Important Dates (Posters & Demo Papers)
* Submission Deadline:  July 17, 2014
* Notification of Acceptance:   July 31, 2014
* Camera Ready Paper:   Aug 01, 2014

Please submit at https://www.easychair.org/conferences/?conf=semantics2014.

Committee:
Sebastian Hellmann, Conference Chair
Christian Dirschl, Industry Chair
Andreas Blumauer, Industry Chair
Agata Filipowska, Scientific Chair
Harald Sack, Scientific Chair
Jens Lehmann, Scientific Chair

-- 
Dr. Jens Lehmann
AKSW Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



[Dbpedia-discussion] SEMANTICS 2014 - Call for Papers + Poster + Demos

2014-03-25 Thread Jens Lehmann
 with other researchers. Poster and demo submissions should
consist of a paper of 1-4 pages that describes the work and its
contribution to the field. Submissions to the Posters & Demonstrations track
must be formatted in the Springer LNCS format
(http://www.springer.com/computer/lncs/lncs+authors), i.e. please do NOT
use the ACM template here.

Submissions will be reviewed by experienced researchers and
practitioners; each submission will receive detailed feedback.

Important Dates (Posters & Demo Papers)
* Submission Deadline:  July 17, 2014
* Notification of Acceptance:   July 31, 2014
* Camera Ready Paper:   Aug 01, 2014

Please submit at https://www.easychair.org/conferences/?conf=semantics2014.



-- 
Dr. Jens Lehmann
AKSW Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] New DBpedia Overview Article Available

2014-03-03 Thread Jens Lehmann

Dear all,

the new DBpedia overview article has been accepted at the Semantic Web 
Journal!

The updated final version is available here:
http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf

Kind regards,

Jens



On 24.06.2013 18:03, Jens Lehmann wrote:

 Dear all,

 we are pleased to announce that a new overview article for DBpedia is
 available: http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf

 The report covers several aspects of the DBpedia community project:

 * The DBpedia extraction framework.
 * The mappings wiki as the central structure for maintaining the
 community-curated DBpedia ontology.
 * Statistics on the multilingual support in DBpedia.
 * DBpedia live synchronisation with Wikipedia.
 * Statistics on the interlinking of DBpedia with other parts of the LOD
 cloud (incoming and outgoing links).
 * Several usage statistics: What kind of queries are asked against
 DBpedia and how did that change over the past years? How much traffic do
 the official static and live endpoint as well as the download server
 have? What are the most popular DBpedia datasets?
 * A description of use cases and applications of DBpedia in several
 areas (drop me mail if important applications are missing).
 * The relation of DBpedia to the YAGO, Freebase and WikiData projects.
 * Future challenges for the DBpedia project.

 After our ISWC 2009 paper on DBpedia, this is the (long overdue) new
 reference article for DBpedia, which should provide a good introduction
 to the project. We submitted the article as a system report to the
 Semantic Web journal, where it will be reviewed.

 Thanks a lot to all article contributors and to all DBpedia developers
 and users. Feel free to spread the information to interested groups and
 users.

 Kind regards,

 Jens



-- 
Dr. Jens Lehmann
Head of AKSW group, University of Leipzig
Homepage: http://www.jens-lehmann.org
Group: http://aksw.org - semantic web research center
Project: http://geoknow.eu - geospatial data on the web




[Dbpedia-discussion] 1st Call for Papers: Knowledge Discovery and Data Mining Meets Linked Open Data (Know@LOD 2014)

2014-02-13 Thread Jens Lehmann
1st Call for Papers: Knowledge Discovery and Data Mining Meets Linked
Open Data

-------------------------------------------------------------------------
Third International Workshop on
Knowledge Discovery and Data Mining Meets Linked Open Data
(Know@LOD 2014)

Co-located with the 11th Extended Semantic Web Conference (ESWC 2014)
May 25-29, Crete, Greece

http://knowalod2014.informatik.uni-mannheim.de
-------------------------------------------------------------------------

The third international workshop on Knowledge Discovery and Data Mining
Meets Linked Open Data (Know@LOD) will be held at the 11th Extended
Semantic Web Conference (ESWC).

Knowledge discovery and data mining (KDD) is a well-established field
with a large community investigating methods for the discovery of
patterns and regularities in large data sets, including relational
databases and unstructured text. Research in this field has led to the
development of practically relevant and scalable approaches such as
association rule mining, subgroup discovery, graph mining, and
clustering. At the same time, the Web of Data has grown to one of the
largest publicly available collections of structured, cross-domain data
sets. While the growing success of Linked Data and its use in
applications, e.g., in the e-Government area, has provided numerous
novel opportunities, its scale and heterogeneity is posing challenges to
the field of knowledge discovery and data mining.

Contributions from the knowledge discovery field may help foster the
future growth of Linked Open Data. Some recent works on statistical
schema induction, mapping, and link mining have already shown that there
is a fruitful intersection of both fields. With the proposed workshop,
we want to investigate possible synergies between both the Linked Data
community and the field of Knowledge Discovery, and to explore novel
directions for mutual research. We wish to stimulate a discussion about
how state-of-the-art algorithms for knowledge discovery and data mining
could be adapted to fit the characteristics of Linked Data, such as its
distributed nature and incompleteness (i.e., the absence of negative
examples), and to identify concrete use cases and applications.

Submissions have to be formatted according to the Springer LNCS
guidelines. We welcome full papers (max. 12 pages) as well as
work-in-progress and position papers (max. 6 pages). Accepted papers will
be published online via CEUR-WS, with a selection of the best papers of
each ESWC workshop appearing in an additional volume edited by Springer.
Papers must be submitted online via Easychair at
https://www.easychair.org/conferences/?conf=knowlod2014

Topics of interest include data mining and knowledge discovery methods
for generating, processing, or using Linked Data, such as:
- Automatic link discovery
- Event detection and pattern discovery
- Frequent pattern analysis
- Graph mining
- Knowledge base debugging, cleaning and repair
- Large-scale information extraction
- Learning and refinement of ontologies
- Modeling provenance information
- Ontology matching and object reconciliation
- Scalable machine learning
- Statistical relational learning
- Text and web mining
- Usage mining

Important Dates:

Submission deadline: March 6th, 2014
Notification: April 1st, 2014
Camera ready version: April 15th, 2014
Workshop: May 25th or 26th, 2014

Organization:

Johanna Völker, University of Mannheim, Germany
Jens Lehmann, University of Leipzig, Germany
Heiko Paulheim, University of Mannheim, Germany
Harald Sack, University of Potsdam, Germany
Vojtěch Svátek, University of Economics, Prague, Czech Republic





[Dbpedia-discussion] New DBpedia Overview Article Available

2013-06-24 Thread Jens Lehmann

Dear all,

we are pleased to announce that a new overview article for DBpedia is 
available: http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf

The report covers several aspects of the DBpedia community project:

* The DBpedia extraction framework.
* The mappings wiki as the central structure for maintaining the 
community-curated DBpedia ontology.
* Statistics on the multilingual support in DBpedia.
* DBpedia live synchronisation with Wikipedia.
* Statistics on the interlinking of DBpedia with other parts of the LOD 
cloud (incoming and outgoing links).
* Several usage statistics: What kind of queries are asked against 
DBpedia and how did that change over the past years? How much traffic do 
the official static and live endpoint as well as the download server 
have? What are the most popular DBpedia datasets?
* A description of use cases and applications of DBpedia in several 
areas (drop me mail if important applications are missing).
* The relation of DBpedia to the YAGO, Freebase and WikiData projects.
* Future challenges for the DBpedia project.

After our ISWC 2009 paper on DBpedia, this is the (long overdue) new 
reference article for DBpedia, which should provide a good introduction 
to the project. We submitted the article as a system report to the 
Semantic Web journal, where it will be reviewed.

Thanks a lot to all article contributors and to all DBpedia developers 
and users. Feel free to spread the information to interested groups and 
users.

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] SOPA Blackout Vote

2012-01-18 Thread Jens Lehmann

Hello,

On 18.01.2012 00:17, Patrick van Kleef wrote:
 
 Attached is the page OpenLink is considering to use on the sites it
 controls including:
 
 * http://dbpedia.org
 * http://dbpedia-live.openlinksw.com
 * http://lod.openlinksw.com
 * http://pingthesemanticweb.com
 * http://uriburner.com
 
 to show its support of sites like Wikipedia, Reddit and many others that
 take a stand against this type of overly broad legislation.

DBpedia Live has joined the protest:

http://live.dbpedia.org/sparql
http://live.dbpedia.org/page/Stop_Online_Piracy_Act

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Official DBpedia Live Release

2011-07-06 Thread Jens Lehmann

Hello Kingsley,

On 24.06.2011 18:08, Kingsley Idehen wrote:
 On 6/24/11 4:38 PM, Jens Lehmann wrote:
 
 Re. Linked Data do remember what's already in place (as part of the hot 
 staging of this whole thing) at: http://dbpedia-live.openlinksw.com/live .
 
 When that was constructed it included Linked Data deployment, naturally.
 
 Since Virtuoso is a common factor, its a VAD install to get a replica 
 via live.dbpedia.org .
 
 Anyway, I know this is early days on the live.dbpedia.org side of 
 things, and this is more about a SPARQL endpoint than entire Linked Data 
 deliverable. Anyway, when it comes to Linked Data and all the other 
 questions posed above, best to first look at what's already been done 
 (over a year now) re: http://dbpedia-live.openlinksw.com :-)

Of course we are aware of this, but we were more focused on getting the
live extraction framework and endpoint working properly. We are, of
course, also very happy to have two working DBpedia live endpoints (with
the OpenLink one being even more powerful on hardware I guess).

We had quite some internal discussions and decided to use the VAD for
now. Mohamed installed it, so there is now Linked Data for DBpedia Live:
http://live.dbpedia.org/resource/Dresden
(Whether we will keep using this URL scheme in the future still needs to
be decided.)

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Dbpedia growth trends

2011-06-29 Thread Jens Lehmann

Hello,

On 29.06.2011 15:23, Luis Galárraga wrote:
 Hi everybody:

 I am a master student at Saarland University (Germany) who is working
 with semantic databases (specifically efficient partitioning) and I was
 wondering if there is available information about the growth of the
 dbpedia datasets in order to somehow justify my work. The webpage says
 that there are 3.500.000 resources by Jan 2010 but it would be great if
 I can show the growth trend. I suspect there is a positive correlation
 with the growth of wikipedia articles but I think it would be better if
 I show directly the amount of semantic information.

We kept all previous releases at http://downloads.dbpedia.org. Analysing 
this would be the easiest way to show the growth of DBpedia. Please let 
us know your results.

Here is the size of the folders (which is not the most accurate measure 
because there are several reasons why the size can change apart from 
more extracted information):

1.8G   ./1.0
2.5G   ./2.0
7.6G   ./3.0rc
5.1G   ./3.0
6.0G   ./3.1
6.4G   ./3.2
7.3G   ./3.3
21G    ./3.4
32G    ./3.5
35G    ./3.5.1
34G    ./3.6
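
If the release folders are mirrored locally, numbers like the above can be
reproduced with a few lines of Python; a minimal sketch (the base path and
the assumption of a complete local mirror are hypothetical):

# Sketch: total size of each locally mirrored release folder, comparable to
# the figures above. BASE is an assumed local mirror of downloads.dbpedia.org.
import os

BASE = "/data/dbpedia-releases"
RELEASES = ["1.0", "2.0", "3.0rc", "3.0", "3.1", "3.2",
            "3.3", "3.4", "3.5", "3.5.1", "3.6"]

for release in RELEASES:
    total = 0
    for root, _dirs, files in os.walk(os.path.join(BASE, release)):
        total += sum(os.path.getsize(os.path.join(root, f)) for f in files)
    print("%-6s %6.1f GB" % (release, total / 1e9))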

Kind regards,

Jens

-- 
Dr. Jens Lehmann
Head of AKSW/MOLE group, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




[Dbpedia-discussion] Official DBpedia Live Release

2011-06-24 Thread Jens Lehmann

Dear all,

the AKSW [1] group is pleased to announce the official release of
DBpedia Live [2]. The main objective of DBpedia is to extract structured
information from Wikipedia, convert it into RDF, and make it freely
available on the Web. In a nutshell, DBpedia is the Semantic Web mirror
of Wikipedia.

Wikipedia users constantly revise Wikipedia articles with updates
happening almost each second. Hence, data stored in the official DBpedia
endpoint can quickly become outdated, and Wikipedia articles need to be
re-extracted. DBpedia Live enables such a continuous synchronization
between DBpedia and Wikipedia.

The DBpedia Live framework has the following new features:

   1. Migration from the previous PHP framework to the new Java/Scala
  DBpedia framework.
   2. Support of clean abstract extraction.
   3. Automatic reprocessing of all pages affected by a schema mapping
  change at http://mappings.dbpedia.org.
   4. Automatic reprocessing of pages that have not been changed for more
      than one month. The main objective of this feature is to ensure that
      any change in the DBpedia framework, e.g. the addition or change of
      an extractor, will eventually affect all extracted resources. It
      also serves as a fallback for technical problems in Wikipedia or
      the update stream.
   5. Publication of all changesets.
   6. Provision of a tool to enable other DBpedia mirrors to be in
  synchronization with our DBpedia Live endpoint. The tool
  continuously downloads changesets and performs changes in a
  specified triple store accordingly.
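
To make point 6 above more concrete, here is a minimal Python sketch of
the general idea behind applying one changeset (one file of added and one
file of removed triples in N-Triples format) to a local store via SPARQL
Update. It is only an illustration, not the synchronization tool itself;
the endpoint URL, graph URI and file names are assumptions.

# Illustrative sketch only (not the official synchronization tool): apply a
# changeset, given as N-Triples files of added and removed triples, to a
# local triple store through its SPARQL Update endpoint.
import urllib.parse
import urllib.request

UPDATE_ENDPOINT = "http://localhost:8890/sparql"  # assumed local endpoint
GRAPH = "http://live.dbpedia.org"                 # assumed target graph URI

def run_update(update):
    data = urllib.parse.urlencode({"update": update}).encode("utf-8")
    urllib.request.urlopen(
        urllib.request.Request(UPDATE_ENDPOINT, data=data)).read()

def apply_changeset(added_file, removed_file):
    removed = open(removed_file, encoding="utf-8").read()
    added = open(added_file, encoding="utf-8").read()
    if removed.strip():  # remove outdated triples first
        run_update("DELETE DATA { GRAPH <%s> { %s } }" % (GRAPH, removed))
    if added.strip():    # then insert the re-extracted triples
        run_update("INSERT DATA { GRAPH <%s> { %s } }" % (GRAPH, added))

# Hypothetical file names:
# apply_changeset("000001.added.nt", "000001.removed.nt")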

Important Links:

* SPARQL-endpoint: http://live.dbpedia.org/sparql
* DBpedia-Live Statistics: http://live.dbpedia.org/livestats
* Changesets: http://live.dbpedia.org/liveupdates
* Sourcecode:
http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/extraction_framework
* Synchronization Tool: http://sourceforge.net/projects/dbpintegrator/files/

Thanks a lot to Mohamed Morsey, who implemented this version of DBpedia
Live as well as to Sebastian Hellmann and Claus Stadler who worked on
its predecessor. We also thank our partners at the FU Berlin and
OpenLink as well as the LOD2 project [3] for their support.

Kind regards,

Jens

[1] http://aksw.org
[2] http://live.dbpedia.org
[3] http://lod2.eu

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Official DBpedia Live Release

2011-06-24 Thread Jens Lehmann

Hi Tom,

On 24.06.2011 14:45, Tom Heath wrote:
 Nice :)
 
 Quick question: on the basis of point 6, will resources in the main
 http://dbpedia.org/ namespace soon reflect the latest changes from
 DBpedia Live? 

Currently, no. (Of course they will reflect the latest changes every few
months in case of a release.)

 Presumably at this point the separate live.dbpedia
 SPARQL endpoint would become redundant?

No, the endpoints will run in parallel (for now at least). Some explanation:

There are two extraction modes of DBpedia:
* dump based (http://dumps.wikimedia.org/)
* live (via update stream)

The dump based extraction is performed in many (90) languages. It
generates all the files at http://downloads.dbpedia.org and some of them
are loaded in the official endpoint
(http://wiki.dbpedia.org/DatasetsLoaded) - mostly files extracted from
the English Wikipedia, but also labels and abstracts in different
languages. The live version of DBpedia currently works on the English
Wikipedia edition and does not generate dumps. Currently, we plan to run
both in parallel, so the live version does not supersede the static dump
based extraction.

Of course, anything we are doing is open for discussion and we welcome
suggestions.

I'll post other replies on the DBpedia mailing list to avoid too much
cross mailing list traffic.

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Official DBpedia Live Release

2011-06-24 Thread Jens Lehmann

Hello,

On 24.06.2011 14:52, Thomas Steiner wrote:
 [Limiting CC list]
 
 Quick question: on the basis of point 6, will resources in the main
 http://dbpedia.org/ namespace soon reflect the latest changes from
 DBpedia Live?
 ...also interested in this question. On a related note, will there be
 http://live.dbpedia.org/{resource|page}/{Thing} pages as an
 intermediate solution?

That's a valid and good question, which is, however, not that easy to
answer. For now, we went the simple route and do not serve DBpedia Live
data as Linked Data, although I see that it would be desirable to have it.

If we serve it from http://live.dbpedia.org/{resource|page}/{Thing}, that
implies changing the resource URIs accordingly (prefix
http://live.dbpedia.org/...). We could do that and add links to the
static URIs. A question would be whether it is desirable to have two
URIs for exactly the same thing from exactly the same source.

If we would decide to have different URIs for the static and live
version, then a related question is whether it is better to use
http://dbpedia.org/resource/... and http://live.dbpedia.org/resource/
- or -
http://dbpedia.org/resource/... and http://static.dbpedia.org/resource/
The latter requires more changes on our (OpenLink, FUB, AKSW) side, but
might be more plausible in the mid/long term.

Another option would be to use a single URI and a content negotiation
mechanism, which can deal with time
(http://events.linkeddata.org/ldow2011/papers/ldow2011-paper02-coppens.pdf),
which would however introduce additional complexity.
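
For illustration, plain content negotiation against a single resource URI
looks roughly like the following minimal Python sketch (the URI is the
Dresden example mentioned elsewhere in this thread; whether and how the
server answers such requests is exactly what is being discussed here):

# Minimal sketch of the content negotiation mechanism discussed above:
# dereference a resource URI and ask for an RDF serialization via the
# Accept header (the URI is taken from an example in this thread).
import urllib.request

uri = "http://live.dbpedia.org/resource/Dresden"
request = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
with urllib.request.urlopen(request) as response:
    print("Final URL:   ", response.geturl())  # shows a 303 redirect target, if any
    print("Content-Type:", response.headers.get("Content-Type"))
    print(response.read(500).decode("utf-8", errors="replace"))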

Input/opinions on those issues are welcome (if there is a best practice
for this case, please let us know).

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] German version of 3.6 download files broken/truncated?

2011-05-20 Thread Jens Lehmann

Hello,

On 04.05.2011 18:53, Max Jakob wrote:
 These are the complete German datasets for DBpedia 3.6:
 
 http://dl.dropbox.com/u/940744/dbpedia/geo_coordinates_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/homepages_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/infobox_properties_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/infobox_property_definitions_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/instance_types_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/labels_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/mappingbased_properties_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/page_links_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/persondata_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/pnd_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/specific_mappingbased_properties_de.nt.bz2
 http://dl.dropbox.com/u/940744/dbpedia/wikipedia_links_de.nt.bz2
 
 I don't have access to the servers at the moment, so I put them in my
 Dropbox for now.
 Jens, could you please be so kind and make one last update on the
 DBpedia server? Thanks a lot.

The fixed files are now also available on the download server for future
reference:
http://downloads.dbpedia.org/3.6/de/fixed_truncated_files/

Kind regards,

Jens

-- 
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Query logs of DBpedia

2011-02-01 Thread Jens Lehmann

Hello,

On 28.01.2011 18:28, Claudio Martella wrote:
 Hello list,

 I'm a phd student and I'm currently working on my project in the field
 of IR. For the evaluation I'm using the DBpedia dataset and therefore
 I'd like to use a set of real-world queries issued to DBpedia to see how
 the system behaves.

 Is such a set publicly available? Would it be possible to have one?

An anonymous log excerpt for DBpedia 3.5.1 is available here:
ftp://download.openlinksw.com/support/dbpedia/

Kind regards,

Jens

-- 
Dr. Jens Lehmann
Head of AKSW/MOLE group, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Query problem

2010-08-06 Thread Jens Lehmann

Hello,

On 04.08.2010 01:33, Abduladem Eljamel wrote:
 Hi all,
 I am trying to execute a construct query to dbpedia.org sparql endpoint
 by using jena. the sparql query is quit long:

 I got this exception:
 HttpException: 500 SPARQL Request Failed.

 I think this exception is related to memory but I am not sure is it the
 memory of my local server or the server used by dbpedia.org sparql
 endpoint. also, when I reduce the query to few lines it works OK.

 Is there any thing can be done to increase the memory used?

Does the query work when you use the form at http://dbpedia.org/sparql? 
(You could also test via something like
wget -S -O- --header='Accept: application/sparql-results+xml' 
'http://dbpedia.org/sparql?query=YOUR QUERY'.)
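
For a scripted version of the same check, with the query explicitly
URL-encoded, a minimal Python sketch could look as follows (the query
string is only a placeholder):

# Minimal sketch: send a SPARQL query to the public endpoint with proper URL
# encoding and print the HTTP status plus the start of the response body.
import urllib.error
import urllib.parse
import urllib.request

query = "SELECT ?s WHERE { ?s a ?type } LIMIT 10"  # placeholder query
url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode({"query": query})
request = urllib.request.Request(
    url, headers={"Accept": "application/sparql-results+xml"})
try:
    with urllib.request.urlopen(request) as response:
        print(response.status)
        print(response.read(300).decode("utf-8", errors="replace"))
except urllib.error.HTTPError as error:
    print("Request failed:", error.code, error.reason)  # e.g. 500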

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Linked Data

2010-08-02 Thread Jens Lehmann

Hello,

On 23.07.2010 03:32, Ainie Zeinaida Idris wrote:
 Hi,

 We just publish our data set, Cardiovascular. We would like this data
 set to be linked with Dbpedia

 http://kt.mimos.my/page/

 The sparql endpoint is at
 http://202.73.13.50:56001/sparql?query=SELECT+%3fs+%3fo+%7b+%3fs+a+%3fo+.+%7d+LIMIT+5&id=cardio

You can use a tool like SILK:
http://www4.wiwiss.fu-berlin.de/bizer/silk/

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] problem with file wikipedia_links_ru.nt.bz2

2010-04-30 Thread Jens Lehmann

Hello Vladimir,

Vladimir Ivanov wrote:
 Dear all,
 
 I've got an "Unexpected end of archive" error when
 extracting wikipedia_links_ru.nt.bz2.
 
 Could someone fix original
 file on 3.51 DBPedia download page, please?

Thanks for reporting this. The file should be fixed now.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Links to Geonames

2010-04-20 Thread Jens Lehmann

Hello,

Carlo Brooks wrote:
 I see.  Is there a way to access the svn directly as opposed to 
 downloading http://downloads.dbpedia.org/3.5/links/geonames_links.nt.bz2 ?
 
 I am not certain there is a better file, but if there is I would just 
 like to know where I can access it...

SVN can be accessed as follows:
http://sourceforge.net/scm/?type=svn&group_id=190976

However, the SVN contains the extraction framework (and not the data 
sets generated by it), so you won't find another Geonames link file there.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Links to Geonames

2010-04-20 Thread Jens Lehmann

Hello,

Tom Morris wrote:
 On Tue, Apr 20, 2010 at 1:45 AM, Jens Lehmann
 lehm...@informatik.uni-leipzig.de wrote:
 
 For some link datasets in DBpedia, there is no proper update mechanism
 included in the DBpedia SVN repository. In such cases, the link data
 sets are copied from the previous release. For Geonames, this means that
 the links you see were not recently updated (and can be as old as one or
 two years).
 
 Is there a list someplace of who is responsible for each of these link
 sets and when they were last updated?

If you go to the download page and click on a data set, you get some 
information (or scroll to the bottom of the page):
http://wiki.dbpedia.org/Downloads35

To see whether data sets have changed compared to previous releases, you 
can go to http://downloads.dbpedia.org/ and compare different releases.

Please note that within the last year the extraction framework was 
rewritten and the live extraction was implemented. It's difficult to 
improve all aspects of DBpedia within a short timeframe and most 
interlinking data sets were never designed for long-term maintenance, 
but were rather one-time efforts. (Anyone is invited to contribute mapping 
code to DBpedia, of course, to improve the situation.)

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] DBpedia Live Extraction - Infobox Annotations

2009-11-15 Thread Jens Lehmann

Hello,

Jens Lehmann wrote:
 Hi all,
 
 while the new DBpedia live extraction framework is in place, there is
 now a discussion regarding additional annotations made in doc subpages
 of Wikipedia infoboxes. 

The discussion now takes place at:
http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29#Infobox_Template_Coherence_Proposal

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Q: dbprop:name attribute meaning

2009-11-11 Thread Jens Lehmann

Hello,

Mitko Iliev wrote:
 Hi All,
 
 When looking at http://dbpedia.org/page/Paul_Hackett_%28American_football%29 
 , we can notice the dbprop:name property with both literals and object 
 references.
 We can guess from context  it is relation but on the other hand name is 
 telling us other meaning .
 I'm wonder what is supposed to express this property, why it is both: literal 
 and object reference? 

Looking at the source of
http://en.wikipedia.org/w/index.php?title=Paul_Hackett_%28American_football%29&oldid=319785362,
I see several templates containing the name attribute:

{{CFB Yearly Record Subhead
 | name  = [[Pittsburgh Panthers football|Pittsburgh Panthers]]
 | conf  = Division I-A Independent
 | startyear = 1989
 | endyear   = 1990
}}
{{CFB Yearly Record Entry
 | championship =
 | year = 1989
 | name = Pittsburgh

For each infobox containing the name attribute (some of them Wiki links
and some not), a name property is extracted. The infobox coherence
proposal we are currently discussing in Wikipedia (see my previous mail
on the list) can solve those problems (in that case another problem is
that name does not stand for the name of the person, but rather for a
team in which the person played). It is not clear yet whether and when
the issue will be fixed.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




[Dbpedia-discussion] DBpedia Live Extraction - Infobox Annotations

2009-11-09 Thread Jens Lehmann

Hi all,

while the new DBpedia live extraction framework is in place, there is
now a discussion regarding additional annotations made in doc subpages
of Wikipedia infoboxes. We introduced these annotations to enable
Wikipedians to edit the DBpedia ontology. The discussion can be found at
[1] and a technical description of our approach in [2]. Please have a
look at our approach and take part in the discussion to help us
provide an appropriate solution and achieve consensus.

Kind regards,

Jens

[1]http://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#DBpedia_Template_Annotations
[2]http://jens-lehmann.org/files/2009_dbpedia_live_extraction.pdf

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Issues getting extraction framework to run on Windows

2009-08-16 Thread Jens Lehmann

Hello,

Alex wrote:
 HI Jens,
 
 No more error messages after the update, so we’ve made some progress at
 least. Thanks for the speedy fix.
 
 However, running extract_test.php does not seem to actually be
 outputting the RDF triples. The following is what I get running the
 script from command prompt.
 
C:\Users\Alex\Documents\DBpedia\extraction>php extract_test.php

<http://dbpedia.org/resource/London>
<http://www.w3.org/2000/01/rdf-schema#label>
"London"@en .
 
 The output takes about two seconds to appear, followed by the program
 immediately terminating. As I understand, extract_test.php should be
 using SimpleDumpDestination and thus printing directly to stdout.

Doesn't it print to stdout? Reading your message, it appears that one
triple (in N-Triples format) was extracted and printed to stdout.

extract_test.php downloads the page specified in extract.php from
Wikipedia (which explains the delay). In this case, it is the article
about London. It then runs the extractor specified in extract_test.php
on this article (by default SampleExtractor). The result is printed to
stdout.

As mentioned previously, extract_full.php should be used for producing a
complete DBpedia release (but you need to use import.php to download the
corresponding Wikipedia dumps beforehand and import them into MySQL databases).

Kind regards,

Jens


-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Issues getting extraction framework to run on Windows

2009-08-15 Thread Jens Lehmann

Hello,

Alex wrote:
 Hello,
 
 PHP Fatal error:  Class 'ValidateExtractionResult' not found in
 C:\Users\Alex\Do
 
 cuments\DBpedia\extraction\extractors\ExtractorContainer.php on line 32
 
 From my previous correspondence with Jens Lehmann, I believe a full
 installation of PHP 5.x (5.3 in my case) and the entire source for the
 extraction framework ought to work “out of the box”. 

We just tried extract_test.php on Windows. There was a small error (<?
instead of <?php in one file), which may cause problems with some
configurations. Can you run svn update and try again?

start.php is deprecated. You can use extract_dataset.php to extract one
data set or extract_full.php to run the complete extraction.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Help tracking changes from 2.0 to 3.2

2009-07-30 Thread Jens Lehmann

Hello,

Chris Welty wrote:
 PREFIX dbr: <http://dbpedia.org/resource/>
 PREFIX yago: <http://dbpedia.org/class/yago/>
 
 in dbp 2.0 we have this triple:
 
 dbr:Noggin_%28protein%29 a yago:Protein114728724.
 
 But in dbp 3.2 (and 3.3) this triple is missing, leading me to ask if
 there are more specific details about the changes along the way (from
 2.0 to 3.2). We've encountered a few other examples of the same thing
 (type triples present in dbp 2.0 and missing in 3.2), and wonder if this
 is a few isolated incidents or the product of something systematic.

This triple is created by the YAGO project [1]. More specifically, those
YAGO type triples are created by running YAGO through the DBpedia
converter (which we jointly created). There has been a major change in
the YAGO data in DBpedia 3.1 (see the changelog [2]).
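
If you want to check on the current endpoint whether a particular type
triple is present, a small PHP sketch along these lines can help (untested;
it assumes PHP 5.2+ with allow_url_fopen enabled and uses the standard
SPARQL JSON results format):

<?php
// Sketch: ask the public endpoint whether the YAGO type triple exists.
$ask = 'ASK { <http://dbpedia.org/resource/Noggin_%28protein%29> '
     . '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> '
     . '<http://dbpedia.org/class/yago/Protein114728724> }';
$url = 'http://dbpedia.org/sparql?format=application%2Fsparql-results%2Bjson'
     . '&query=' . urlencode($ask);
$result = json_decode(file_get_contents($url), true);
var_dump($result['boolean']);   // true if the triple is in the loaded data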

Kind regards,

Jens

[1] http://www.mpi-inf.mpg.de/yago-naga/yago/
[2] http://wiki.dbpedia.org/ChangeLog

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] From instance to class

2009-07-17 Thread Jens Lehmann

Hello,

Piero Molino wrote:
 Hi Jens,
 
[...]
 
 This reminds me of the Leacock-Chodorow measure used in my research lab
 for calculating semantic distance in WordNet. The fact that there's a
 class which is a kind of root is a good thing for this. Is there
 something else like that I should know about? Or can you suggest a tool
 for visualizing the ontology, to become aware of its characteristics?
 (In a university course we used Protégé for building example ontologies;
 could it be useful?)

Yes, Protégé can be useful. If you open the DBpedia ontology in Protégé,
go to the OWLViz tab, then select Options => Radius 5, you get an
overview of all classes.

 Google comes up with a
 few papers with more sophisticated approaches related to measuring
 distance in ontologies [3,4,5], which might be helpful.
 
 It's really funny that the second paper you're suggesting has been
 written by researchers in the same laboratory of the university I'm
 currently working in :) so I thank you for the suggestion, and I will
 probably go ask them for some advice about distance metrics.

If you meet Francesca Lisi or Nicola Fanizzi, send them my regards. :-)

 Is there some kind of limitation i'm not aware of that can  
 stop me doing what i described?

 In your description, you assume that there is one class for each object.
 In general, an object can be instance of several classes. In particular,
 it can also belong to several most specific classes. However, this
 does seem to be rare in the DBpedia ontology (and you can generalise the
 above description to this case).
 
 Ok i get it. Now for example let's take:
 
 http://dbpedia.org/page/Bari
 
 (my home town). The rdf:type property (which I'm assuming is the one
 useful for the mapping) gives back:
 
 rdf:type http://www.w3.org/1999/02/22-rdf-syntax-ns#type
 
 * dbpedia-owl:Place http://dbpedia.org/ontology/Place
 * dbpedia-owl:Area http://dbpedia.org/ontology/Area
 * dbpedia-owl:Resource http://dbpedia.org/ontology/Resource
 * dbpedia-owl:PopulatedPlace
   http://dbpedia.org/ontology/PopulatedPlace
 * http://dbpedia.org/class/yago/AncientGreekCities
 * http://dbpedia.org/class/yago/CitiesAndTownsInApulia
 * http://dbpedia.org/class/yago/CoastalCitiesAndTownsInItaly
 * http://dbpedia.org/class/yago/PortCitiesInItaly
 
 
 Googling YAGO, I've found it's an ontology based on the WordNet structure
 (more or less). By the way, as you told me, the classes are each more
 specific than another. Is there a way to determine how deep a class is,
 other than calculating a path to owl:Thing? I'm asking this because
 right now I'm thinking of mapping an instance to one class, maybe the
 most specific one; then again, I may find some other ways, like mapping
 to every class and then taking the deepest... I don't know, I will have
 to think a bit more about this :)

DBpedia has different class hierarchies (DBpedia ontology, YAGO, 
OpenCyc, Umbel), which you should not mix in your approach. See Section 
3.2 in our latest DBpedia paper [2] for an overview. The DBpedia 
ontology has the prefix http://dbpedia.org/ontology/.

Since we currently store all types of an entity (Place, Area,
PopulatedPlace) and not just the most specific one (PopulatedPlace), you
could also estimate the depth by simply counting the number of classes.
This works as long as there is a single most specific class and we keep
storing all more general classes in the SPARQL endpoint (which might
change in the future).
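
To illustrate the counting idea, here is a rough PHP sketch against the
public endpoint (untested; it simply counts the rdf:type values of Bari
that fall under http://dbpedia.org/ontology/ and assumes allow_url_fopen
is enabled):

<?php
// Sketch: estimate the depth of an instance by counting its DBpedia
// ontology classes (Place, PopulatedPlace, ... are all stored explicitly).
$query = 'SELECT DISTINCT ?type WHERE { '
       . '<http://dbpedia.org/resource/Bari> '
       . '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ?type . '
       . 'FILTER(regex(str(?type), "^http://dbpedia.org/ontology/")) }';
$url = 'http://dbpedia.org/sparql?format=application%2Fsparql-results%2Bjson'
     . '&query=' . urlencode($query);
$result = json_decode(file_get_contents($url), true);
echo count($result['results']['bindings']) . "\n";   // number of ontology classes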

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] SKOS, Eponymous Categories, and Main Articles

2009-07-17 Thread Jens Lehmann

Hello,

Matt Mullins wrote:
 Hi,
 
[...]
 
 I feel like this would be a valuable addition to the category 
 information already available, but I don't want to pretend to know how 
 to go about extracting this information or how to denote it (some SKOS 
 predicate maybe?).  Does anyone else think this would be useful 
 information?  Is anyone familiar enough with the current extraction
 framework to know how doable this is, or why it hasn't been done
 before?  I know I won't be able to work it into my project
 (deadlines...)  but this project has exposed me to so many new ideas and 
 technologies that I now feel invested in them all.

It hasn't been done because we were not really aware of it. Linking a
category and its main article could be useful and is doable within the
extraction framework (by adding a new extractor which searches for
certain patterns on category pages). However, if I understand you
correctly, you will not have time to do this yourself(?). If this is the
case, then you can add it as a feature request to our tracker:
http://sourceforge.net/tracker/?group_id=190976&atid=935523
However, at the moment we have some urgent items on our to-do list, so we
cannot make any promises about whether and when we will implement it.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Why is the OWL ontology in RDF/XML?

2009-07-15 Thread Jens Lehmann

Hello,

Paul Houle wrote:
 
 Any chance we could get the OWL ontology in NT as well?

It can be converted of course:
http://downloads.dbpedia.org/3.2/en/dbpedia-ontology.nt

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Why is the OWL ontology in RDF/XML?

2009-07-15 Thread Jens Lehmann

Hello,

Paul Houle wrote:
 Jens Lehmann wrote:
 I ran it through a converter last night and got a document that,  
 like yours,  contained blank nodes.  These are implicit in the RDF-XML,  
 but need to be named in order to be serialized as NT.  That's one 
 substantial difference between the OWL ontology and the rest of dbpedia.

That's true. If you do not like the blank nodes, you could perform a
string replace, e.g. "_:genid" replaced by "http://dbpedia.org/genid/".
In general, an OWL axiom may require several RDF triples to represent
it. Usually, this is done by introducing blank nodes.
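
For instance, a small PHP sketch of such a replacement (untested; it
assumes the converter labels blank nodes in the form _:genidNNN, adds the
angle brackets that N-Triples requires around the new IRIs, and uses
placeholder file names):

<?php
// Sketch: skolemize blank nodes in an N-Triples file by rewriting labels
// like _:genid42 into IRIs under http://dbpedia.org/genid/ .
$in  = fopen('dbpedia-ontology.nt', 'r');             // placeholder input file
$out = fopen('dbpedia-ontology-skolemized.nt', 'w');  // placeholder output file
while (($line = fgets($in)) !== false) {
    // N-Triples IRIs need angle brackets, so add them while replacing.
    $line = preg_replace('/_:genid(\w+)/', '<http://dbpedia.org/genid/$1>', $line);
    fwrite($out, $line);
}
fclose($in);
fclose($out);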

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] From instance to class

2009-07-15 Thread Jens Lehmann

Hello Piero,

Piero Molino wrote:
 Hello everyone,
 
 Now, I don't know where to start for doing this (my software is in
 Java). Which dump should I use? Does anyone know reliable open-source
 libraries for managing OWL (I have never used it before, so I'm really
 new to it)?

There are several ways to achieve this depending on how exactly you 
measure distance between classes. One way would be to use SPARQL and 
query the official endpoint, e.g. using Jena [1]. The second way would 
be to use the OWL API [2]. What to do specifically depends on your
distance metric. For instance, you could ask yourself whether two
classes A1 and A2 are similar in your scenario if A1 is a superclass of
A2(?). A simple way would be to query the parent classes of A1 until a
class A' is found which is also a parent of A2. You then get a path from A1 to
A2 with A' as middle element and can measure its length. Due to the 
existence of owl:Thing such a path always exists. Google comes up with a 
few papers with more sophisticated approaches related to measuring 
distance in ontologies [3,4,5], which might be helpful.
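
As a concrete starting point, here is a rough PHP sketch of that idea
against the public SPARQL endpoint (untested; the same pattern can be
written in Java with Jena or the OWL API). It assumes each class has at
most one direct superclass, which holds for most of the DBpedia ontology,
and that allow_url_fopen is enabled; the two class URIs at the end are
only illustrative:

<?php
// Sketch: path length between two classes via their first common ancestor.
function directSuperclass($class) {
    $query = 'SELECT ?super WHERE { <' . $class . '> '
           . '<http://www.w3.org/2000/01/rdf-schema#subClassOf> ?super } LIMIT 1';
    $url = 'http://dbpedia.org/sparql?format=application%2Fsparql-results%2Bjson'
         . '&query=' . urlencode($query);
    $result = json_decode(file_get_contents($url), true);
    $bindings = $result['results']['bindings'];
    return $bindings ? $bindings[0]['super']['value'] : null;
}

function ancestors($class) {               // the class itself plus all superclasses
    $path = array($class);
    while (($class = directSuperclass($class)) !== null) {
        $path[] = $class;
    }
    return $path;                          // ends at owl:Thing (or the topmost class)
}

function pathLength($a1, $a2) {            // edges from $a1 up to A' and down to $a2
    $up1 = ancestors($a1);
    $up2 = ancestors($a2);
    foreach ($up1 as $i => $class) {
        $j = array_search($class, $up2);
        if ($j !== false) {                // first common ancestor A'
            return $i + $j;
        }
    }
    return -1;                             // no common ancestor found
}

echo pathLength('http://dbpedia.org/ontology/City',
                'http://dbpedia.org/ontology/Country') . "\n";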

 Is there some kind of limitation i'm not aware of that can  
 stop me doing what i described?

In your description, you assume that there is one class for each object. 
In general, an object can be instance of several classes. In particular, 
it can also belong to several most specific classes. However, this 
does seem to be rare in the DBpedia ontology (and you can generalise the 
above description to this case).

Kind regards,

Jens

[1]http://jena.sourceforge.net/
[2]http://owlapi.sourceforge.net/
[3]http://www.aaai.org/Papers/Workshops/2005/WS-05-01/WS05-01-015.pdf
[4]http://www.di.uniba.it/~cdamato/kes2008-AKS_Track.pdf
[5]http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=arnumber=245isnumber=190

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] new dbpedia

2009-07-13 Thread Jens Lehmann

Hello Paul,

Paul Houle wrote:
 Jürgen Jakobitsch wrote:
 prediction: in a couple of years, everything will be RDF - but
 vocabularies will only be understood in limited geographical areas -
 gone will be the vision of global communication.

 http://dbpedia.org/page/New_York_City - dbpprop:latd (xsd:integer)
 http://dbpedia.org/page/Paris - dbpprop:latLong - 
 dbpedia:Paris/latLong/coord - dbpprop:coordProperty (some xsd:integer - not 
 interpretable)
 http://dbpedia.org/page/Berlin - dbpprop:latD - (xsd:double)
 http://dbpedia.org/page/Oslo - dbpprop:latDeg - (xsd:integer)
 http://dbpedia.org/page/Babylon - geo:lat - (xsd:float)

   
 This is just the beginning of problems that you face if you try to 
 do serious geospatial reasoning with dbpedia data (or even try to draw 
 maps.)

We will try to improve this situation in the future (i.e. the live 
version of DBpedia). geo:lat and geo:long coordinates should be 
preferred. The dbpprop properties are extracted from Wikipedia 
infoboxes, which are not mapped to the DBpedia ontology. We will allow 
adding such mappings (hopefully) soon.
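
For example, a query of the following shape reads the preferred
coordinates (sketch only; submit it to http://dbpedia.org/sparql like any
other query, e.g. with curl or a SPARQL library):

<?php
// Sketch: read the W3C geo vocabulary coordinates instead of raw dbpprop values.
$query = <<<SPARQL
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?lat ?long WHERE {
  <http://dbpedia.org/resource/Berlin> geo:lat ?lat ;
                                       geo:long ?long .
}
SPARQL;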

 Imagine the meaning of a point coordinate for new york city,  as 
 compared to a point coordinate for the statue of liberty.  The statue of 
 liberty fills a footprint on the ground which is about 10 m in radius.  
 It's reasonable to pretend that it's a point if you're drawing a map of 
 NYC.  NYC represents a ground footprint that is more like 10 km in 
 radius.  At best,  you can represent it with a centroid or a point 
 that's particularly significant (Google maps,  for instance,  locates 
 New York City at the 42nd and 7th intersection by the Port Authority Bus 
 Terminal;)  the point for NYC is pretty much meaningless if you're 
 drawing a map of the city,  but it would be useful if you were drawing a 
 map of the Northeastern US.

The OpenStreetMap project [1] faces the same problems and solves them
using "ways" for large objects and "nodes" for small objects. You might
also be interested in our new LinkedGeoData effort [2].

Kind regards,

Jens

[1] http://www.openstreetmap.org
[2] http://linkedgeodata.org

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Querying DbPedia to get country datas

2009-06-04 Thread Jens Lehmann

Hello,

Petite Escalope wrote:
 Hello,
 I need to build a profile of all the countries of the world (area,
 population, currency, etc.).
 
 So I would like to have the information contained in the infoboxes of
 all Wikipedia country pages (see this example:
 http://en.wikipedia.org/wiki/Benin).
 
 I'm sure DBpedia contains this information, but I can't work out how
 to get it!
 
 Could you help me with an example or something I can use?

How familiar are you with Semantic Web technologies? You can get this 
information by performing a SPARQL query at http://dbpedia.org/sparql. I 
would build the query by selecting instances of dbpedia-owl:Country 
(http://dbpedia.org/ontology/Country) and then picking the properties you want:

select * where {
?country a dbpedia-owl:Country .
?country dbpedia-owl:areaMetro ?area }

The difficult part is to make the query as complete as possible, in
particular if the property you are looking for is not in the DBpedia
ontology (i.e. its URI does not start with http://dbpedia.org/ontology/).
In that case you need to query for several properties and use OPTIONAL
patterns in your query.
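
To make the last point concrete, here is a sketch of such a query
(property names are illustrative and may differ between releases; check a
country's resource page to see which ones are actually populated):

<?php
// Sketch: OPTIONAL keeps a country in the result even when one of the
// properties is missing for it.
$query = <<<SPARQL
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
SELECT ?country ?area ?capital ?currency WHERE {
  ?country a dbpedia-owl:Country .
  OPTIONAL { ?country dbpedia-owl:areaTotal ?area }
  OPTIONAL { ?country dbpedia-owl:capital ?capital }
  OPTIONAL { ?country dbpedia-owl:currency ?currency }
}
SPARQL;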

Kind regards,

Jens




Re: [Dbpedia-discussion] shortabstract_en.nt: character encoding?

2009-05-25 Thread Jens Lehmann

Hello,

Sven Hartrumpf wrote:
 Hi all.
 
 A question about character encoding in shortabstract_en.nt, for example:
 <http://dbpedia.org/resource/%C4%8C%C3%A1raj%C3%A1vri>
 <http://www.w3.org/2000/01/rdf-schema#comment> "\u010C\u00E1raj\u00E1vri is a
 lake in the municipality of Kautokeino-Guovdageaidnu in Finnmark county,
 Norway."@en .
 
 How can %C4%8C be decoded? Obviously it's not Unicode.
 (As a side note: I would really like a UTF-8-only version of all DBpedia
 files - I know some tools need the above tricks, but ...)

That is URL encoding (percent-encoding of the UTF-8 bytes). There should
be a urldecode() function available for your programming language to
reverse the encoding process.
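
In PHP, for instance:

<?php
// The percent escapes are simply the UTF-8 bytes of the characters.
echo urldecode('%C4%8C%C3%A1raj%C3%A1vri') . "\n";     // prints: Čárajávri
echo rawurldecode('%C4%8C%C3%A1raj%C3%A1vri') . "\n";  // same here; rawurldecode() leaves '+' untouched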

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Spaql endpoint and post method

2009-03-31 Thread Jens Lehmann

Hello,

Ahmet YILDIRIM wrote:
 
 Some aspect of the HTTP Request is invalid. Possible problems:
 
* Missing or unknown request method
* Missing URL
* Missing HTTP Identifier (HTTP/1.0)
* Request is too large
* Content-Length missing for POST or PUT requests
* Illegal character in hostname; underscores are not allowed
[...]
 
 Where am i doing wrong?

Did you try to fix all the issues reported by Virtuoso above (by setting
further curl parameters)?

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Spaql endpoint and post method

2009-03-30 Thread Jens Lehmann

Hello,

Ahmet YILDIRIM wrote:
 Hi,
 
 I want to query information using the SPARQL endpoint in a PHP script.
 I could only use the GET method to submit my query. I want to submit
 larger queries using the POST method. I tried something with the curl
 extension but always got an invalid request error.
 
 Has anyone done this before? Can you help me?
 
 Here is what I tried before:
 
 
 $ch = curl_init("http://dbpedia.org/sparql");
 curl_setopt($ch, CURLOPT_POST, 1);
 curl_setopt($ch, CURLOPT_POSTFIELDS,
 'default-graph-uri=http%3A%2F%2Fdbpedia.org&should-sponge=&query='.urlencode($sparql_query).'&format=text%2Fxml&debug=on');
 $sonuc = curl_exec($ch);

Assuming $url is the URL you want (you can test it on the command line,
include the default graph http://dbpedia.org), you could do the following:

$headers = array("Content-Type: ".$this->contentType);
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $url);
curl_setopt($c, CURLOPT_HTTPHEADER, $headers);
$contents = curl_exec($c);
curl_close($c);

$contentType can be "application/sparql-results+xml",
"application/sparql-results+json", "text/rdf+n3", etc., depending on what
you need. The result of your query is in $contents.

Kind regards,

Jens


-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] Questions about DBpedia and SPARQL

2009-02-09 Thread Jens Lehmann

Hello,

Piero Molino wrote:
 Hi everyone,
 
 my name is Piero Molino and I'm a student from the University of Bari,
 Computer Science department.
 For my degree thesis I'm working on an ontology-based retrieval
 algorithm and, after searching for a while, I decided to use DBpedia as
 my multi-domain ontology. Anyway, I'm at the very beginning with SPARQL
 (links to books/tutorials/anything useful are really, really welcome)
 and probably what now seems to me to be a problem really isn't: while I
 figured out how my algorithm would make inferences over DBpedia, I'm
 lacking the first step (which isn't really what my thesis is about, so
 I can reuse someone else's approach to it). Basically, I have a list of
 words and I have to map each of them to a DBpedia resource.
 Trying to figure out how I can do it, I thought I could start by taking
 a look at the free-text search on the DBpedia website
 (http://wiki.dbpedia.org/OnlineAccess#h28-8), but for each example
 query I get a javascript+html response and an empty result. Isn't it
 working, or is it me not understanding the results in the right way?

If you want to get from a string to a set of URIs, you can use the
DBpedia Lookup service:
http://lookup.dbpedia.org/

The API doc is here:
http://lookup.dbpedia.org/api/search.asmx

You could also query the SPARQL endpoint directly using the built-in
bif:contains function of Virtuoso.
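
A sketch of such a free-text query (bif:contains is Virtuoso-specific, so
it only works against Virtuoso-backed endpoints such as
http://dbpedia.org/sparql):

<?php
// Sketch: keyword search over rdfs:label using Virtuoso's bif:contains.
$query = <<<SPARQL
SELECT DISTINCT ?resource ?label WHERE {
  ?resource <http://www.w3.org/2000/01/rdf-schema#label> ?label .
  ?label bif:contains "London" .
  FILTER (lang(?label) = "en")
}
LIMIT 20
SPARQL;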

 Anyway, simple text search wouldn't be enough because of
 disambiguation issues, so I thought I could use Gabrilovich's ESA
 (http://www.srcco.de/v/wikipedia-esa) to retrieve a Wikipedia page for
 each word in the list and then get the DBpedia resource corresponding
 to the Wikipedia page. Because of my current lack of experience with
 SPARQL, I don't know if there is a simpler way to achieve the same
 result.

Actually, it is very simple to get from Wikipedia URLs to DBpedia URIs
by using the lookup service above or modifying the URLs (which is after
all what we do when extracting data from Wikipedia: we use
http://dbpedia.org/resource/$wikipedia_article_identifier with %2F
replaced by / and %3A replaced by :).
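
A minimal PHP sketch of that URL rewriting (assumptions: English Wikipedia
URLs, and the helper name wikipediaToDbpedia is made up for illustration):

<?php
// Sketch: turn a Wikipedia article URL into the corresponding DBpedia URI.
function wikipediaToDbpedia($wikipediaUrl) {
    // e.g. http://en.wikipedia.org/wiki/Noggin_(protein)
    $article = rawurldecode(substr($wikipediaUrl, strlen('http://en.wikipedia.org/wiki/')));
    $encoded = rawurlencode($article);   // yields e.g. Noggin_%28protein%29
    // DBpedia keeps '/' and ':' readable inside resource identifiers.
    $encoded = str_replace(array('%2F', '%3A'), array('/', ':'), $encoded);
    return 'http://dbpedia.org/resource/' . $encoded;
}

echo wikipediaToDbpedia('http://en.wikipedia.org/wiki/Noggin_(protein)') . "\n";
// http://dbpedia.org/resource/Noggin_%28protein%29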

 I would really appreciate it if someone could help me both in extending
 my SPARQL knowledge and in finding a better and simpler solution for
 the problem I'm trying to solve.

For general advice on SPARQL documentation, tutorials etc., this
probably isn't the right group (please ask at the W3C Semantic Web
mailing list, but make sure to search the web first).

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] DBpedia 3.2 release, including DBpedia Ontology and RDF links to Freebase

2008-11-18 Thread Jens Lehmann

Hi Chris,

Chris Bizer wrote:
 Hi Hugh and Richard,
 
 interesting discussion indeed. 
 
 I think that the basic idea of the Semantic Web is that you reuse existing
 terms or at least provide mappings from your terms to existing ones.
 
 As DBpedia is often used as an interlinking hub between different datasets
 on the Web, it should in my opinion clearly have a type b) ontology using
 Richard's classification.
 
 But what does this mean for WEB ontology languages?
 
 Looking at the current discussion, I feel reassured that if you want to do
 WEB stuff, you should not move beyond RDFS, even aim lower and only use a
 subset of RDFS (basically only rdf:type, rdfs:subClassOf and
 rdfs:subPropertyOf) plus owl:SameAs. Anything beyond this seems to impose
 too tight restrictions, seems to be too complicated even for people with
 fair Semantic Web knowledge, and seems to break immediately when people
 start to set links between different schemata/ontologies.

I do not fully agree. First of all, let's not forget that we also have
UMBEL and YAGO as two schemata on top of the DBpedia data, which do not
impose many restrictions. People are free to use those (in particular,
UMBEL is designed to be of type b).

Regarding your arguments:

Too tight restrictions: Which ones specifically are too tight? If the
restrictions cause inconsistencies (which they are likely to do at the
moment), then this signals a problem in the DBpedia data. (Which is
one of the purposes of imposing restrictions.)

Too complicated: I don't have the impression that the people writing
here have no idea about the meaning of domain and range. Even if this is
the case, no one forces them to use them.

Breaks when you set links: True, so we should be careful in setting
those links to other schemata.

 Dublin Core and FOAF went down this road. And maybe DBpedia should do the
 same (meaning to remove most range and domain restrictions and only keep the
 class and property hierarchy).
 
 Can anybody of the ontology folks tell me convincing use cases where the
 current range and domain restrictions are useful? 

I think there are many of those. First of all, they allow checking the
consistency of the DBpedia data. Having consistent data makes it possible
to provide nice user interfaces for DBpedia. Before this release, it was
hardly possible to write a user-friendly UI for DBpedia data unless you
restricted yourself to a specific part of the data. One of the other main
problems was/is querying DBpedia. A better structure also helps a lot in
formulating SPARQL queries. We had questions like "How do I query the
properties of buildings?" on the mailing list. Using the domain
restrictions, you can now easily say which properties you should query,
and the range tells you what you will get (an integer value, a string, an
instance of a certain class, etc.). This will probably help us make more
sophisticated use of Semantic Web structures than we are doing now.
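
For example, this is the kind of query a UI (or a query author) can now
run to find out which properties apply to buildings and what values to
expect (sketch only, assuming the ontology declares a Building class as
the question above suggests):

<?php
// Sketch: use rdfs:domain/rdfs:range from the DBpedia ontology to discover
// the properties applicable to a class and their value types.
$query = <<<SPARQL
SELECT ?property ?range WHERE {
  ?property <http://www.w3.org/2000/01/rdf-schema#domain>
            <http://dbpedia.org/ontology/Building> .
  OPTIONAL { ?property <http://www.w3.org/2000/01/rdf-schema#range> ?range }
}
SPARQL;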

 (Validation does not count as WEB ontology languages are not designed for
 validation and XML schema should be used instead if tight validation is
 required).

As a consequence, OWL should never be used for consistency checking?

 If not, I would opt for removing the restrictions.

What is the added value in removing the restrictions?

Kind regards,

Jens


-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] DBpedia 3.2 release, including DBpedia Ontology and RDF links to Freebase

2008-11-18 Thread Jens Lehmann

Hello John,

John Goodwin wrote:
 
 Regarding you arguments:

 Too tight restrictions: Which ones specifically are too 
 tight? If the restrictions cause inconsistencies (which they 
 are likely to do at the moment), then this is a signal a 
 problem in the DBpedia data. (Which is one of the purposes of 
 imposing restrictions.)
 
 I've noticed that properties like father have a domain of British
 Royal or Monarch and I wonder if this is too tight. Would you not save
 yourself headaches in the future by relaxing that restriction to Person?
 For example if you want to add in father information for US presidents
 will you then have to go back and edit your OWL ontology to include US
 presidents in the domain of father. 
 
 Furthermore, I understand disjunctions can be expensive when reasoning
 (not sure if that would be the case in the Dbpedia ontology as it
 doesn't use that much extra OWL). 

You are right. The ontology is automatically created and closely fits 
the data (so at the moment it is indeed too restrictive) and in the 
future this will be done by a community process.

To avoid confusion I believe we have to separate two topics/claims here:

1.) DBpedia should not use domain and range.
2.) Currently, the domain/range restrictions are too restrictive.

Depending on the topic, "too tight" can be understood differently. As far
as I understand, Chris talks about the first topic, while you talk about 
the second. I don't fully agree with Chris in that matter, i.e. I think 
that providing domain and range adds value to DBpedia. I do agree with 
your point of view that some of the domain and range restrictions are 
too restrictive at the moment. The latter can be fixed. We can do this 
either when we have a user interface for the ontology mappings or before 
(manually).

 I think there are many of those. First of all, they allow 
 checking consistency in the DBpedia data. Having consistent 
 data allows to provide nice user interfaces for DBpedia. 
 
 I'm still not sure how domain and range will help check consistency.
 Don't you need OWL disjoints 

OWL disjoints can (and probably will be) added.

 and other information to find
 inconsistencies, unless of course you check all the inferred types for
 the instances? 

Due to the amount of data any reasoning tasks are challenging, but not 
impossible (maybe a challenge for approximate, incomplete inference 
engines; reasoning with large ABoxes etc.).

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] process dbpedia data

2008-08-21 Thread Jens Lehmann

Hello,

jiusheng chen wrote:
 
 Hi,
 
 I am wondering whether these existing development toolkits (like Jena)
 can handle data of such size. What will happen if I import all DBpedia
 core datasets into Jena?

Yes, it is possible to load it in Jena (based on an SQL database),
Virtuoso, and Sesame:
http://www4.wiwiss.fu-berlin.de/benchmarks-200801/

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Sitemap for DBpedia 3.1

2008-08-20 Thread Jens Lehmann

Hello,

Richard Cyganiak wrote:
 Hi,
 
 I notice that the sitemap at http://dbpedia.org/sitemap.xml has not  
 yet been updated for DBpedia 3.1.
 
 I would like to provide an updated sitemap.
 
 What changes are required to bring the sitemap up to date? Is it just  
 changing all the download locations from /3.0/ to /3.1/? Or did some  
 of the filenames change? Is the list of dumps that are loaded into  
 Virtuoso any different in 3.1 than in 3.0?

There has been a name change in the yago files and the Geo extractor is
run for all languages. I believe for all other files only the version
number needs to be changed. Below is the list of files loaded into the
official SPARQL endpoint (@Zdravko: copied from your mail on August 8,
please correct me if anything has changed).

Kind regards,

Jens


links_uscensus_en.nt
links_revyu_en.nt
links_quotationsbook_en.nt
links_musicbrainz_en.nt
links_gutenberg_en.nt
links_geonames_en.nt
links_factbook_en.nt
links_eurostat_en.nt
links_dblp_en.nt
links_cyc_en.nt
links_bookmashup_en.nt
infoboxproperties_en.nt
infobox_en.nt
disambiguation_en.nt
categories_label_en.nt
articles_label_en.nt
articlecategories_en.nt
longabstract_pl.nt
longabstract_no.nt
longabstract_nl.nt
longabstract_ja.nt
longabstract_it.nt
longabstract_fr.nt
longabstract_fi.nt
longabstract_es.nt
longabstract_en.nt
longabstract_ru.nt
longabstract_pt.nt
longabstract_zh.nt
longabstract_sv.nt
shortabstract_de.nt
redirect_en.nt
persondata_en.nt
shortabstract_pt.nt
shortabstract_pl.nt
shortabstract_no.nt
shortabstract_nl.nt
shortabstract_ja.nt
shortabstract_it.nt
shortabstract_fr.nt
shortabstract_fi.nt
shortabstract_es.nt
shortabstract_en.nt
wikipage_ja.nt
wikipage_it.nt
wikipage_fr.nt
wikipage_fi.nt
wikipage_es.nt
wikipage_en.nt
wikipage_de.nt
wikicompany_links_en.nt
skoscategories_en.nt
shortabstract_zh.nt
shortabstract_sv.nt
shortabstract_ru.nt
yagolink_en.nt
yago_en.nt
wordnetlink_en.nt
wikipage_zh.nt
wikipage_sv.nt
wikipage_ru.nt
wikipage_pt.nt
wikipage_pl.nt
wikipage_no.nt
wikipage_nl.nt
longabstract_de.nt
image_en.nt
geo_zh.nt
geo_sv.nt
geo_ru.nt
geo_pt.nt
geo_pl.nt
geo_no.nt
geo_nl.nt
geo_ja.nt
geo_it.nt
geo_fr.nt
geo_fi.nt
geo_es.nt
geo_en.nt
geo_de.nt
flickr_en.nt
homepage_fr.nt
homepage_en.nt
homepage_de.nt
externallinks_en.nt

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] where is the YAGO classes data for v3.1?

2008-08-20 Thread Jens Lehmann

Hello,

jiusheng chen wrote:
 
 Hi All,
 
 In previous versions like v3.0 we had the YAGO classes data; where is it
 in the new version? It is said that the DBpedia-YAGO mapping has improved
 a lot, and I am eager to take a look at it. :)

Please go to the download page [1] and search for YAGO. It is at the
bottom of the second list. (It was also loaded in the official SPARQL
endpoint already.)

The data sets might be updated soon, because there are apparently still
some encoding issues.

Kind regards,

Jens

[1] http://wiki.dbpedia.org/Downloads31

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



[Dbpedia-discussion] Announcement: DBpedia 3.1 Release

2008-08-13 Thread Jens Lehmann

Hello,

hereby we announce the 3.1 release of DBpedia.

As always, downloads are available at [1] and the list of changes since
DBpedia 3.0 is in our changelog [2]. Some notable improvements are a
much better YAGO mapping, providing a more complete (more classes
assigned to instances) and accurate (95% accuracy) class hierarchy for
DBpedia. The Geo extractor code has been improved and is now run for all
14 languages. URI validation has switched to the PEAR validation class.

Overall, we now provide 6.0 GB (58.4 GB uncompressed) of downloadable nt
and csv files. The triple count (excluding pagelinks) has surpassed the
100 million barrier and is now at 116.7 million triples, which is an
increase of 27% compared to DBpedia 3.0.

The extraction was performed on a server of the AKSW [3] research group.
I would like to thank Sören Auer, Jörg Schüppel, Chris Bizer, Richard
Cyganiak, Georgi Kobilarov, Christian Becker, the OpenLink team, and all
other contributors for their DBpedia support.

Kind regards,

Jens Lehmann

[1] http://wiki.dbpedia.org/Downloads
[2] http://wiki.dbpedia.org/Changelog
[3] http://aksw.org

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Downtime estimate available?

2008-07-21 Thread Jens Lehmann

Hello Peter,

Peter Ansell wrote:
 Hi,
 
 Just wondering if there is an estimate on how long http://dbpedia.org/
 will be down for. I am not sure when it first went down but just
 wondering if there is maintenance or something going on for a known
 amount of time.

The wiki itself is hosted on one of our servers in Leipzig. You can
reach it via http://wiki.dbpedia.org. (Usually, http://dbpedia.org
should be a redirect to the Wiki.) The linked data interface has
undergone some changes last week, which may be the reason why it is down
now. We hope OpenLink will fix this issue soon.

Kind regards,

Jens


-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] WordNet links

2008-07-11 Thread Jens Lehmann

Hello,

Csaba Veres wrote:
 I have only been looking at the WordNet link file for
 a couple of days, but I have already found a number of
 problems. I have posted the errors as bugs on the
 sourceforge bug tracker. But this does not seem to be
 particularly active!!

We have fixed some bugs in the past days and are preparing a new
release. We (DBpedia) still don't get funding, so it is sometimes hard
to find sufficient time.

 Does anyone know the processes responsible for the
 WordNet links, how to suggest changes, etc. etc.??

The tracker is the right place to post bugs and feature requests. It's
indeed a problem to know who is responsible for which data set.
@others: Maybe we should add more information at the bottom of the
download page [1]?

I assigned your reports to Georgi as he may be able to help. We can
probably solve the problems you posted manually, but I do not know how
accurate the WordNet links are in general. Apart from this, you can
(with moderate effort) contribute to DBpedia and improve the WordNet
extractor if you like.

Kind regards,

Jens

[1] http://wiki.dbpedia.org/Downloads30

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Finding related or similar entities in DBPedia

2008-06-07 Thread Jens Lehmann

Hello,

Omid Rouhani wrote:
 
 Basically I want a graph between nodes in DBPedia (Wikipedia).
 
 I'd be happy for any advice or suggestions regarding what papers to take
 a closer look at if you guys have either worked with this problem
 before or have stumbled upon good papers that are relevant to this
 topic.

You may be interested in the DBpedia relationship finder, which finds
paths in the RDF graph between two objects:
http://wikipedia.aksw.org/relfinder/

Some information about it can be found here:
http://jens-lehmann.org/files/2007_relfinder.pdf
Of course, knowing the shortest paths between objects is still different
from knowing how similar these objects are.

If you are not looking for paths/graphs, but numbers describing
similarity of resources/objects, then searching for (dis)similarity
measures/metrics for RDF/OWL/Semantic Web will probably bring up a few
results, e.g. this one:
ftp://ftp.cs.wisc.edu/machine-learning/shavlik-group/ilp07wip/ilp07_damato.pdf

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Next DBpedia release ?

2008-06-07 Thread Jens Lehmann

Hello,

robl wrote:
 Hi,
 
 Just wondering if there was a schedule/roadmap for the next release(s) 
 of the DBpedia dataset ?  It's coming up to half a year since the last 
 one was released and it would be nice to pull in some of the updated 
 data from Wikipedia at some point.

We'll probably start another extraction at the end of June. Christian
Becker wants to update the GeoExtractor soon, the Yago mapping should be
updated, and hopefully a few bugs will be fixed.

 If there isn't going to be one soon, then does anyone have any stats on 
 how long it takes to run the dbpedia extraction scripts (1-2 days) ?  As 
 I'd like to update my local copy of the data.

Depends on your machine and on whether you've imported the Wikipedia
dumps already. If everything runs smoothly, you need about 10 days on an
average computer to import the dumps and extract the data sets. Of
course, you can choose to extract only the data sets and language
versions you need to reduce the runtime of the extraction script.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc




Re: [Dbpedia-discussion] DBpedia, Yago Class Hierarchy, and Virtuoso Inferencing

2008-06-06 Thread Jens Lehmann

Hello Kingsley,

Kingsley Idehen wrote:
 
[...]
 
 Anyway, the live DBpedia has been updated.
 Important Note: The injection of Yago Class Hierarchy rules into the
 current data set is an enhancement to the current DBpedia release.
 Behind the scenes, there is work underway, aimed at providing an
 enhanced Class Hierarchy and associated inference rules using UMBEL
 (which meshes Yago and OpenCyc).
 
 
 Once the next Virtuoso release is out, those of you with local
 installations will be able to do the following via iSQL or the Virtuoso
 Conductor:
[...]

This is excellent news. Kudos to the OpenLink team for making this
possible. :-)

Lightweight reasoning on very large knowledge bases is one of the main
challenges in the Semantic Web area, so this is another step forward.
Enabling inference for DBpedia will serve (and has already served) as a
test bed for assessing Virtuoso's performance and stabilising it further.

Kind regards,

Jens

-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc



Re: [Dbpedia-discussion] Six properties for a person's date of birth

2008-05-29 Thread Jens Lehmann

Hello,

Kingsley Idehen wrote:
 Jens Lehmann wrote:

 
 We are completing a similar test here based on my response earlier this 
 week.

Was the test successful?

 I assume you are trying to load the Yago Class Hierarchy? If so, let us 
 finish our investigation and then we will have a proper report :-)

Yago was already loaded into our local Virtuoso instance at that time. 
Then we used rdfs_rule_set ('http://dbpedia.org', 'http://dbpedia.org'); 
or something similar to enable inferencing, which returned an out of 
memory error after a couple of hours.

Kind regards,

Jens


-- 
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
