[Dbpedia-discussion] DL-Learner 1.0 (Supervised Structured Machine Learning Framework) Released

2015-02-13 Thread Jens Lehmann

Dear all,

The AKSW group [1] is happy to announce DL-Learner 1.0.

DL-Learner is a framework containing algorithms for supervised machine 
learning in RDF and OWL. DL-Learner can use various RDF and OWL 
serialization formats as well as SPARQL endpoints as input, can connect 
to most popular OWL reasoners and is easily and flexibly configurable. 
It extends concepts of Inductive Logic Programming and Relational 
Learning to the Semantic Web in order to allow powerful data analysis.

Website: http://dl-learner.org
GitHub page: https://github.com/AKSW/DL-Learner
Download: https://github.com/AKSW/DL-Learner/releases
ChangeLog: http://dl-learner.org/development/changelog/

DL-Learner is used for data analysis tasks within other tools such as 
ORE [2] and RDFUnit [3]. Technically, it uses refinement-operator-based, 
pattern-based and evolutionary techniques for learning on structured 
data. For a practical example, see [4]. DL-Learner also offers a plugin 
for Protégé [5], which suggests axioms to add to an ontology. 
DL-Learner is part of the Linked Data Stack [6] - a repository for 
Linked Data management tools.
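
For readers who want to try it out, the quickest route is the 
command-line interface shipped with the release. A minimal sketch (the 
archive name, script path and example config are assumptions based on 
the repository layout - check the download page for the actual names):

   # unpack the release and run the CLI on a bundled example config
   unzip dllearner-1.0.zip && cd dllearner-1.0
   ./bin/cli examples/father.conf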

We want to thank everyone who helped to create this release, in 
particular (alphabetically) An Tran, Chris Shellenbarger, Christoph 
Haase, Daniel Fleischhacker, Didier Cherix, Johanna Völker, Konrad 
Höffner, Robert Höhndorf, Sebastian Hellmann and Simon Bin. We also 
acknowledge support by the recently started SAKE project, in which 
DL-Learner will be applied to event analysis in manufacturing use cases, 
as well as the GeoKnow [7] and Big Data Europe [8] projects where it is 
part of the respective platforms.

View this announcement on Twitter and the AKSW blog:
   https://twitter.com/dllearner/status/566172443442958336
   http://blog.aksw.org/2015/dl-learner-1-0/

Kind regards,

Lorenz Bühmann, Jens Lehmann and Patrick Westphal

[1] http://aksw.org
[2] http://ore-tool.net
[3] http://aksw.org/Projects/RDFUnit.html
[4] http://dl-learner.org/community/carcinogenesis/
[5] https://github.com/AKSW/DL-Learner-Protege-Plugin
[6] http://stack.linkeddata.org
[7] http://geoknow.eu
[8] http://www.big-data-europe.eu

-- 
Dr. Jens Lehmann
Head of AKSW group, University of Leipzig
Homepage: http://www.jens-lehmann.org
Group: http://aksw.org - semantic web research center
Project: http://geoknow.eu - geospatial data on the web



[Dbpedia-discussion] cfp: EXTENDED DEADLINE: ICEMIS 2015, International Conference on Engineering & MIS 2015

2015-02-13 Thread Federica Cena

- Our apologies if you receive multiple copies of this CFP -

Important Dates:

Paper Submission Deadline: 1 April, 2015 *NEW!!*
Notification Acceptance: 10 March, 2015
Camera-Ready Papers Due & Registration Deadline: 10 April, 2015
Conference Date: 24-26 September, 2015

CALL FOR PAPERS

ICEMIS 2015
The International Conference on Engineering & MIS 2015

September 24-26, 2015
Istanbul, Turkey

ICEMIS 2015 is soliciting original and previously unpublished papers 
addressing research challenges and advances towards a world of 
wireless, applied engineering, renewable energy, mobile, and 
multimedia pervasive communications.


News:
- Keynote speaker announced: Prof. Mukesh Mohania (ACM Distinguished 
Scientist and IEEE Golden Core member).

- Keynote speaker announced: Prof. Dr. Erol Kurt, Gazi University, 
Ankara, Turkey: chief editor of the International Journal of 
Informatics Technology (IJIT, www.ijit.info) and field editor of the 
Journal of Polytechnic (JoP, www.politeknik.gazi.edu.tr).

- Keynote speaker announced: Prof. G.-W. (Willi) Weber, IAM, METU, 
Ankara, Turkey: advisor to the EURO Conferences, member of numerous 
national OR societies and other scientific organizations, and 
representative of the German OR Society in Turkey.


The International Conference on Engineering & MIS 2015 will take place 
in Istanbul, Turkey, 24-26 September, 2015. ICEMIS 2015 is organized 
by the International Association of Researchers (IARES), a non-profit 
international association for researchers. The conference focuses on 
frontier topics in theoretical and applied engineering and MIS. The 
ICEMIS conference serves as a platform for our members and the entire 
engineering and MIS community to meet and exchange ideas.


All submitted papers will be peer reviewed and accepted papers will be 
published in the conference proceedings. The abstracts will be indexed 
and made available in major academic databases such as DBLP and ACM 
(pending). All accepted papers will be published in special issues of 
international journals, including:

1. International Journal of Cloud Applications and Computing
2. International Journal of Informatics Technology
3. Journal of Polytechnic
4. Other International Journals

ICEMIS is composed of the following 18 tracks. Topics of interest 
include, but are not limited to, the following:
- Bioinformatics & Biology: Safety Management, Rubber, Surface Science, 
Systems and Control, Green Chemistry Forum, and any topic related to 
this track.
- Electrical Engineering and Communications Systems: Communications 
Theory: Coding theory and techniques, Fading channels, Multiplexing, 
Adaptive modelling, Filtering techniques, Noise reduction, 
Transmission diversity, Demodulation, Synchronization, Queuing Theory, 
Modulation, Network Management, Communications Protocols, Wireless 
Networks, Location-based Services and Positioning, Next Generation 
Internet, Network Applications & Services, and any topic related to 
this track.
- Education in ICT: E-learning, Distance Education, Educational 
Technology, Virtual Education, and others.
- Robotics: Programming, Software, Hardware, Applications, Tools, 
Systems and Technologies.
- Web Technologies: Social Networks, Programming Languages, Semantics, 
Semantic Web, Mobile Applications, Mobile Engineering, Web Application 
Development, and others.

- Health Advances and Technologies
- Machine Learning
- Modeling & Simulation
- Engineering Projects Management
- Computing & Applications: OS, Databases, Cloud Computing, Soft 
Computing, Genetic Programming, Genetic Algorithms, Swarm Intelligence, 
Machine Learning, Pattern Recognition, Fuzzy Systems, Fuzzy Logic, 
Neural Networks, Multi-Agent Systems, Neural Genetic Systems, Neural 
Fuzzy Systems, Time Series Analysis, Forecasting, Reinforcement 
Learning, Statistical Data Analysis, Complex Systems, Evolutionary 
Systems. Applications of soft computing to: control problems, sensor 
fusion, manufacturing, testing and diagnostics, semiconductor 
processing, modeling of complex systems, approximate reasoning, 
optimization, management science, decision analysis, information 
systems, economics, financial systems, business applications, 
forecasting, reliability, cost-benefit analysis, risk assessments, 
medical applications, forensic applications, automotive applications, 
robotics, automation, etc. New developments in soft computing: 
development of applications or approaches combining fuzzy logic, 
neural networks, evolutionary computation, expert systems, etc.; soft 
computing software; hardware implementations of soft computing.
- Signal Processing Engineering
- Renewable Energy
- Management Information Systems (MIS): e-Commerce vs. e-Business, 
Building an e-Business, e-Gov, Designing a web site, elements of a web 
site, evaluation and usability, Web site design: site plan, strategies 
and designs, E-commerce Web Site Design: strategies and models, 
Payments, 

[Dbpedia-discussion] Fwd: Re: [Dbpedia-gsoc] Contributing to DBpedia for GSOC 2015

2015-02-13 Thread Marco Fossati
Forwarding to the mailing list


-------- Forwarded Message --------
Subject: Re: [Dbpedia-gsoc] Contributing to DBpedia for GSOC 2015
Date: Fri, 13 Feb 2015 16:26:09 +0100
From: Marco Fossati hell.j@gmail.com
To: Akshita Jha zenith...@gmail.com

Hi Akshita,

have a look at this page:
https://github.com/dbpedia/extraction-framework/wiki/Warm-up-tasks

Cheers!

On 2/10/15 4:50 PM, Akshita Jha wrote:
 Hi,

 I am a GSoC 2015 applicant, currently in my 3rd year of B.Tech
 (Computer Science). I have a background in NLP, MT, and programming,
 and I want to contribute to DBpedia. I would be really grateful if
 you could help me get started.

 --
 Regards,
 Akshita Jha




-- 
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
Skype: hell_j





Re: [Dbpedia-discussion] dbpedia extraction parser failure

2015-02-13 Thread Jona Christopher Sahnwaldt
See 
https://github.com/dbpedia/extraction-framework/wiki/Extraction-Instructions#abstract-extraction
( via https://www.google.de/search?q=dbpedia%20abstract%20extraction )
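
Two quick checks for the failure modes discussed below (a sketch: the 
API URL is copied from the stack trace, and the dump file name from 
the earlier message in this thread):

   # abstract extraction renders pages through a local MediaWiki
   # mirror; verify the endpoint from the stack trace responds at all
   curl -s 'http://localhost/mediawiki/api.php?action=query&format=json'

   # an "unexpected end of stream" usually means a truncated download;
   # test the integrity of the dump archive before extracting
   bzip2 -tv enwiki-20150112-pages-articles.xml.bz2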

On Fri, Feb 13, 2015 at 8:10 AM, Anupam Mishra
anupam.nihil...@gmail.com wrote:
 Thanks Jona,

 now it's working; I'm getting results in RDF format.

 I was interested in extracting the abstracts of companies, so I used

 # ../run extraction extraction.abstracts.properties

 which throws this exception:

 INFO: Error retrieving abstract of title=Anarchism;ns=0/Main/;language:wiki=en,locale=en in 3 tries. Giving up.
 Load factor: 0.45
 java.io.FileNotFoundException: http://localhost/mediawiki/api.php
 at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1834)
 at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1439)
 at org.dbpedia.extraction.mappings.AbstractExtractor$$anonfun$retrievePage$1.apply$mcVI$sp(AbstractExtractor.scala:147)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:166)
 at org.dbpedia.extraction.mappings.AbstractExtractor.retrievePage(AbstractExtractor.scala:132)
 at org.dbpedia.extraction.mappings.AbstractExtractor.extract(AbstractExtractor.scala:86)
 at org.dbpedia.extraction.mappings.AbstractExtractor.extract(AbstractExtractor.scala:29)
 at org.dbpedia.extraction.mappings.CompositeExtractor$$anonfun$extract$1.apply(CompositeExtractor.scala:14)
 at org.dbpedia.extraction.mappings.CompositeExtractor$$anonfun$extract$1.apply(CompositeExtractor.scala:14)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:252)
 at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
 at org.dbpedia.extraction.mappings.CompositeExtractor.extract(CompositeExtractor.scala:14)
 at org.dbpedia.extraction.mappings.WikiParseExtractor.extract(WikiParseExtractor.scala:31)
 at org.dbpedia.extraction.mappings.WikiParseExtractor.extract(WikiParseExtractor.scala:22)
 at org.dbpedia.extraction.mappings.CompositeExtractor$$anonfun$extract$1.apply(CompositeExtractor.scala:14)
 at org.dbpedia.extraction.mappings.CompositeExtractor$$anonfun$extract$1.apply(CompositeExtractor.scala:14)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:252)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:252)
 at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
 at org.dbpedia.extraction.mappings.CompositeExtractor.extract(CompositeExtractor.scala:14)
 at org.dbpedia.extraction.mappings.CompositeParseExtractor.extract(CompositeParseExtractor.scala:54)
 at org.dbpedia.extraction.mappings.CompositeParseExtractor.extract(CompositeParseExtractor.scala:11)
 at org.dbpedia.extraction.mappings.RootExtractor.apply(RootExtractor.scala:24)
 at org.dbpedia.extraction.dump.extract.ExtractionJob$$anonfun$1.apply(ExtractionJob.scala:31)
 at org.dbpedia.extraction.dump.extract.ExtractionJob$$anonfun$1.apply(ExtractionJob.scala:26)
 at org.dbpedia.extraction.util.SimpleWorkers$$anonfun$apply$1$$anon$2.process(Workers.scala:23)
 at org.dbpedia.extraction.util.Workers$$anonfun$1$$anon$1.run(Workers.scala:144)

 On Tue, Feb 10, 2015 at 9:09 PM, Jona Christopher Sahnwaldt
 j...@sahnwaldt.de wrote:

 Hi Anupam,

 Something's wrong with your file.
 enwiki-20150113-pages-articles.xml.bz2 does not exist on
 dumps.wikimedia.org, but enwiki-20150112-pages-articles.xml.bz2 and
 wikidatawiki-20150113-pages-articles.xml.bz2 do.

 Please download the enwiki dump and try again. The best way is to
 adapt download.minimal.properties and extraction.default.properties to
 your needs and then execute

 ../run download config=download.minimal.properties

 and later

 ../run extraction extraction.default.properties


 The warnings you sent imply that the parser is reading the wikidata
 dump file, not the enwiki file.

 The unexpected end of stream error probably means that the file is
 corrupted.

 Regards,
 JC

 On Tue, Feb 10, 2015 at 3:05 PM, Anupam Mishra
 anupam.nihil...@gmail.com wrote:
  Hi All,
 
  I have downloaded DBpedia extraction framework and trying 

Re: [Dbpedia-discussion] URIs vs. other IDs (Was: New user interface for dbpedia.org)

2015-02-13 Thread Kingsley Idehen

On 2/8/15 11:28 AM, Kingsley Idehen wrote:

On 2/7/15 6:07 PM, Markus Kroetzsch wrote:

Hi Kingsley,

We are getting a bit off-topic here, but let me answer briefly ...

On 07.02.2015 21:36, Kingsley Idehen wrote:
...


No, it isn't duplication. Wikipedia HTTP URLs identify Wikipedia
documents. DBpedia URIs identify entities associated with Wikipedia
documents. There's a world of difference here!


That's not my point (I know the difference, of course). Wikidata 
stores neither Wikipedia URLs nor DBpedia URIs. It just stores 
Wikipedia article names together with Wikimedia site (project) 
identifiers. The work to get from there to the URL is the same as the 
work to get to the URI. Storing either explicitly in another property 
value would only introduce redundancy (and potential 
inconsistencies). In a Linked Data export you could easily include 
one or both of these URIs, depending on the application, but it's not 
so clear that doing this in a data viewer would make much sense. 
Surely it would not be useful if people had to enter all of this data 
manually three times.


On that note, is it the current best practice that all linked data 
exports include links to all other datasets that contain related 
information (exhaustive two-way linking)? That seems like a lot of 
triples and not very feasible if the LOD Web grows (a bit like 
two-way HTML linking ... ;-). Wouldn't it be more practical to 
integrate via shared key values? In this case, Wikipedia URLs might 
be a sensible choice to indicate the topic of a resource, rather than 
requiring all resources that have a Wikipedia article as their topic 
to cross link to all (quadratically many) other such resources 
directly. I would be curious to hear your take on this.

There are similar issues with most of the other identifiers: they are
usually the main IDs of the database, not the URIs of the
corresponding RDF data (if available).


Hmm.. if you look at the identifiers on the viewer's right hand side,
you will find out (depending on your understanding of Linked Open Data
concepts) that they too identify entities that are associated with Web
pages, rather than Web pages themselves.


Sure, but you are confusing the purpose of URIs with the underlying 
technical standard here. People use identifiers to refer to entities, 
of course, yet they do not use identifiers that are based on the URI 
standard. We both know about the limitations of this approach, but 
that does not change the shape of the IDs people use to refer to 
things (e.g., on Freebase, but it is the same elsewhere). Usually, if 
you want to interface with such data collections (be it via UIs or 
via APIs), you need to use their official IDs, while URIs are not 
supported.


This is also the answer to your other comment. You are only seeing 
the purpose of the identifier, and you rightly say that there should 
be no big technical issue to use a URI instead. I agree, yet it has 
to be done, and it has to be done differently for each case. There is 
no general rule for how to construct URIs from the official IDs used by 
open data collections on today's Web.


A related problem is that most online data sets have UIs that are 
much more user-friendly than any LOD browser could be, based on the 
RDF they export. There is no incentive for users to click on a 
LOD-based view of, say, IMDB, if they can just go to the IMDB page 
instead. This should be taken into account when building a DBpedia 
LOD view (back on topic! ;-): people who want to learn about 
something will usually be better served by going to Wikipedia; the 
target audience of the viewer is probably a different group who wants 
to inspect the DBpedia data set. This should probably affect how the 
UI is built, and maybe will lead to different design decisions than 
in the Wikidata browser I mentioned.


Markus


Markus,

Cutting a long story short: yes, you have industry-standard 
identifiers, and likewise HTTP URIs that identify things according to 
Linked Open Data principles.
You simply use relations such as dcterms:identifier (and the like) to 
incorporate industry-standard identifiers into an entity description. 
Even better, those relations should be inverse-functional in nature. 
That's really it.


DBpedia identifiers (HTTP-URI-based references) and industry-standard 
identifiers (typically literal in nature) aren't mutually exclusive.
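
As a concrete sketch of that pattern (the property choice follows the 
paragraph above; the literal ID value is made up purely to illustrate 
the join-on-identifier idea):

   # find entity descriptions carrying a given industry-standard ID
   # as a dcterms:identifier literal (hypothetical ID value)
   curl -G http://dbpedia.org/sparql \
     --data-urlencode 'query=
       PREFIX dcterms: <http://purl.org/dc/terms/>
       SELECT ?entity WHERE { ?entity dcterms:identifier "tt0111161" }
       LIMIT 10' \
     -H 'Accept: application/sparql-results+json'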


Getting back on topic: Reasonator is a nice UI. What it lacks, from a 
DBpedia perspective, is incorporation of DBpedia URIs, an issue the 
author of the tool assured me he will address as a high priority.


Follow-up in regards to the above, our biggest concern boils down to 
dealing with the following challenges, which highly impact UI and UX:


1. replacing URIs with the objects of certain annotation-oriented 
relations (rdfs:label, skos:prefLabel, skos:altLabel, etc.)
2. paging results -- in situations where the number of relations 
associated with an entity description is 

[Dbpedia-discussion] SPARQL on DBpedia extraction

2015-02-13 Thread Anupam Mishra
Hi,

I am interested in extracting company details using SPARQL on the
DBpedia dump (enwiki-20150205-pages-articles.xml.bz2).

I executed extraction.default.properties, which works fine,
but extraction.abstracts.properties is not producing any abstract data.

Please find below the query; we are looking for this kind of result
from the DBpedia dump (enwiki-20150205-pages-articles.xml.bz2):

http://dbpedia.org/snorql/?query=SELECT+*+WHERE+{%0D%0A%3Fsubject+rdf%3Atype+%3Chttp%3A%2F%2Fdbpedia.org%2Fontology%2FCompany%3E.%0D%0A%3Fsubject+rdfs%3Alabel+%3Flabel.%0D%0A%3Fsubject+rdfs%3Acomment+%3Fabstract.%0D%0AFILTER+%28lang%28%3Flabel%29+%3D+%22en%22+%26%26+lang%28%3Fabstract%29+%3D+%22en%22%29%0D%0A}+LIMIT+200%0D%0A

SELECT * WHERE {
?subject rdf:type <http://dbpedia.org/ontology/Company>.
?subject rdfs:label ?label.
?subject rdfs:comment ?abstract.
FILTER (lang(?label) = "en" && lang(?abstract) = "en")
} LIMIT 200
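
Note that SPARQL is answered by a triple store, not by the XML dump 
itself: the Turtle/N-Triples files produced by the extraction run 
first have to be loaded into a store. A rough sketch using Apache Jena 
(file names are illustrative - use whatever your extraction actually 
produced):

   # load the extracted triples into a TDB dataset, then serve it
   tdbloader --loc=./tdb labels_en.ttl instance_types_en.ttl
   fuseki-server --loc=./tdb /dbpedia

   # run the query above against the local endpoint (saved as company.rq)
   curl -G http://localhost:3030/dbpedia/sparql \
     --data-urlencode query@company.rq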



Thanks,
Anupam