CfP: International Workshop on Semantic Big Data @ ACM SIGMOD 2016

2015-12-17 Thread Sven Groppe
**
CALL FOR PAPERS
International Workshop on Semantic Big Data (SBD 2016)

In conjunction with ACM SIGMOD 2016

1 July 2016, San Francisco, USA
Submission: 15 February 2016

Web: http://www.ifis.uni-luebeck.de/~groppe/sbd
**

** Aims of the Workshop **

The current World-Wide Web enables easy, instant access to a vast amount of 
online information. However, Web content is typically intended for human 
consumption and is not tailored for machine processing. The Semantic Web is 
hence intended to establish a machine-understandable Web, and its technologies 
are now also used in many domains beyond the Web itself. The World Wide Web 
Consortium (W3C) has developed a number of standards around this vision. Among 
them is the Resource Description Framework (RDF), which serves as the data 
model of the Semantic Web. The W3C has also defined SPARQL as the RDF query 
language, RIF as a rule language, and the ontology languages RDFS and OWL for 
describing RDF schemas. The use of common ontologies increases 
interoperability between heterogeneous data sets, and even proprietary 
ontologies provide an additional abstraction layer that facilitates the 
integration of such data sets. We can therefore argue that the Semantic Web is 
ideally suited to heterogeneous Big Data environments.
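As a toy illustration of the data model and of RDFS schema reasoning (plain Python, not tied to any particular RDF library; all names such as `ex:Berlin` are invented for the example), RDF represents data as subject-predicate-object triples, and an rdfs:subClassOf axiom licenses simple type inference:

```python
# RDF models data as (subject, predicate, object) triples.
triples = {
    ("ex:Berlin", "rdf:type", "ex:City"),
    ("ex:City", "rdfs:subClassOf", "ex:Place"),
}

def infer_types(triples):
    """Apply the RDFS rule: if (x, rdf:type, C) and
    (C, rdfs:subClassOf, D), then (x, rdf:type, D).
    Iterate until a fixpoint is reached."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (x, "rdf:type", d)
            for (x, p, c) in inferred if p == "rdf:type"
            for (c2, p2, d) in inferred
            if p2 == "rdfs:subClassOf" and c2 == c
        }
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

print(("ex:Berlin", "rdf:type", "ex:Place") in infer_types(triples))  # True
```

Real RDFS/OWL reasoners implement many more entailment rules, but the fixpoint pattern above is the basic idea behind materialization-based reasoning.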

We define Semantic Big Data as the intersection of Semantic Web data and Big 
Data. There are masses of Semantic Web data freely available to the public, 
thanks to the efforts of the Linked Data initiative. According to 
http://stats.lod2.eu/, the freely available Semantic Web data currently 
comprises approximately 90 billion triples in over 3,300 datasets, many of 
which are accessible via SPARQL query servers called SPARQL endpoints. Anyone 
can submit SPARQL queries to a SPARQL endpoint via a standardized protocol; 
the queries are processed on the endpoint's datasets and the results are sent 
back in a standardized format. Hence, not only is Semantic Big Data freely 
available, but distributed execution environments for Semantic Big Data are 
also freely accessible. This makes the Semantic Web an ideal playground for 
Big Data research.
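For SELECT queries, the standardized result format is typically the W3C SPARQL 1.1 Query Results JSON Format. A minimal sketch of extracting variable bindings from such a document (the sample response body and the example.org URIs in it are invented for illustration):

```python
import json

# A SPARQL SELECT result as an endpoint would return it in the
# W3C SPARQL 1.1 Query Results JSON Format.
response_body = """
{
  "head": { "vars": ["city", "population"] },
  "results": {
    "bindings": [
      { "city": { "type": "uri", "value": "http://example.org/Berlin" },
        "population": { "type": "literal", "value": "3500000" } }
    ]
  }
}
"""

def extract_bindings(body):
    """Flatten each binding row into a plain dict of variable -> value."""
    doc = json.loads(body)
    return [
        {var: cell["value"] for var, cell in row.items()}
        for row in doc["results"]["bindings"]
    ]

rows = extract_bindings(response_body)
print(rows[0]["city"])  # http://example.org/Berlin
```

In practice such a body is obtained by sending the query to an endpoint over HTTP as defined by the SPARQL 1.1 Protocol, requesting the `application/sparql-results+json` media type.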

The goal of this workshop is to bring together academic researchers and 
industry practitioners to address the challenges of Semantic Big Data, to 
report and exchange research findings, including new approaches, techniques 
and applications, to make substantial theoretical and empirical contributions, 
and to significantly advance the state of the art in Semantic Big Data.

** Types of Papers **

The workshop solicits papers of the following types:
  - Research Papers propose new approaches, theories or techniques related to 
Semantic Big Data including new data structures, algorithms and whole systems. 
They should make substantial theoretical and empirical contributions to the 
research field.
  - Experiments and Analysis Papers focus on the experimental evaluation of 
existing approaches, including data structures and algorithms for Semantic Big 
Data, and bring new insights through the analysis of these experiments. Such 
papers can, for example, show the benefits of well-known approaches in new 
settings and environments, open new research problems by demonstrating 
unexpected behavior or phenomena, or compare a set of traditional approaches 
in an experimental survey.
  - Application Papers report practical experiences on applications of Semantic 
Big Data. Application Papers might describe how to apply Semantic Web 
technologies to specific application domains with big data demands like social 
networks, web search, e-business, collaborative environments, e-learning, 
medical informatics, bioinformatics and geographic information systems. 
Application Papers might describe applications using linked data in a new way.
  - Vision Papers identify emerging or future research issues and directions, 
and describe new research visions that create demands for Semantic Big Data. 
Such visions may have a great impact on society.

** Topics of Interest **

We welcome papers on the following topics:
  - Semantic Data Management, Query Processing and Optimization in
- Big Data
- Cloud Computing
- Internet of Things
- Graph Databases
- Federations
- Spatial and Spatio-Temporal Data
  - Evaluation strategies for Semantic Big Data of Rule-based Languages like 
RIF and SWRL
  - Ontology-based Approaches for Modeling, Mapping, Evolution and Real-world 
ontologies in the context of Semantic Big Data
  - Reasoning Approaches (Real-World Applications, Efficient Algorithms) 
especially designed for Semantic Big Data environments
  - Linked Data
- Integration of Heterogeneous Linked Data
- Real-World Applications
- Statistics and Visualizations
- Quality
- Ranking Techniques
- Provenance
- Mining and 

Senior Researcher / PostDoc / PhD Positions at the University of Bonn

2015-12-17 Thread Jens Lehmann
There are several positions available as 1. Akademischer Rat (comparable 
to Assistant Professor), 2. PostDoc and 3. PhD Student at the University 
of Bonn. Please refer to the details for the three types of positions below.


===
1. Akademischer Rat Position at the University of Bonn
===

We are looking for a senior researcher (Akademischer Rat on payment 
scale 100% A13 [1] comparable to Assistant Professor) for 3 years with a 
possible extension to 6 years.


Requirements:

* A completed PhD and a Master's degree in a relevant field (Computer
  Science or related).

Research and Teaching Areas:

The candidate should have experience in *one of* the following areas (it 
is not required to cover more than one area):


* Big Data, Machine Learning, Data Mining
* Semantic Technologies and Linked Data
* Geospatial data modelling and analysis
* Natural language processing, in particular Question Answering

Depending on previous qualification, the candidate will be responsible 
for courses on Machine Learning, Data Science or Data Engineering.


We expect:

* Keen interest in top level conference and journal publications
* Responsibility for courses at the Computer Science Institute for 3
  hours per week (4 SWS)
* Experience in acquiring and running research and industry projects
* Co-supervision of PhD, Master's and Bachelor's theses
* Experience in software development and project management
* Interest in transferring research results into practice
* Fluent command of German and English

We offer:

* You will work at one of the leading [2] German Universities and have
  the opportunity to build your own research team. The goal is to
  perform internationally leading research which can be applied in high
  impact use cases.
* The candidate will be supported with personal resources to the extent
  possible as well as an integration into an international
  collaboration network.
* You will enjoy a close collaboration with Fraunhofer IAIS [3] as a
  leading research institute for large scale machine learning and data
  mining.
* The payment will be 100% A13 for 3 years with a possible extension
  for a further 3 years.
* You will get financial support to attend related conferences.

To apply, please send an email to Martina Doelp (mart...@iai.uni-bonn.de) 
including a CV, two recommendation letters, a PhD certificate and a one 
page motivation letter including a short overview of previous research 
and acquisition activities. Applications are possible until all 
positions have been filled. Please do not send mails larger than 10MB.


Please direct administrative questions to Martina Doelp 
(mart...@iai.uni-bonn.de) and all other questions to Prof. Jens Lehmann 
(jens.lehm...@cs.uni-bonn.de).


The University of Bonn is an equal opportunities employer.

[1] example calculation: 
http://oeffentlicher-dienst.info/c/t/rechner/beamte/nw?id=beamte-nrw=A_13=0=3=100==2015b=1=0=2

[2] https://en.wikipedia.org/wiki/University_of_Bonn#Ranking
[3] http://www.iais.fraunhofer.de

===
2. PostDoc Positions at the University of Bonn
===

We are looking for Postdoctoral Researchers (German: 
Wissenschaftliche(r) Mitarbeiter(in)) at the Computer Science Institute 
at the University of Bonn.


Requirements:

* A completed PhD and a Master's degree in a relevant field (Computer
  Science or related).
* Proficiency in spoken and written English. Proficiency in German is
  desired.
* Experience in *one of* (not necessarily more) the following areas:
  * Big Data, Machine Learning, Data Mining
  * Semantic Technologies and Linked Data
  * Geospatial data modelling and analysis
  * Natural language processing, in particular Question Answering

We expect:

* Keen interest in top level conference and journal publications
* Co-supervision of PhD, Master's and Bachelor's theses
* Interest in acquiring and running research and industry projects
* Experience in software development and project management
* Interest in transferring research results into practice and
  commercialising them

We offer:

* You will work at one of the leading [1] German Universities and have
  the opportunity to build your own research team. The goal is to
  perform internationally leading research which can be applied in high
  impact use cases.
* You will enjoy a close collaboration with Fraunhofer IAIS [2] as a
  leading research institute for large scale machine learning and data
  mining.
* The payment will be between 50% and 100% TV-L 13 and the contract
  duration between 2 and 4 years depending on previous experience and
  involvement in projects.
* You will get financial support to attend related conferences and the
  possibility to obtain a discounted public transport ticket.

To apply, please 

Join us for the next Protege Short Course at Stanford University, March 21 - 23, 2016!

2015-12-17 Thread Tania Tudorache
*** Apologies for cross-posting! 

Dear all,

We are very happy to announce that the next Protege Short Course will be held 
at Stanford University, California, March 21-23, 2016.

The Protege Short Course offers 3-day intensive training in the use of the 
Protege toolset, ontology development, and OWL. We cover best practices in 
ontology building and the latest Semantic Web technologies, including OWL 2, 
RDF, and SPARQL. We also cover topics such as real-world applications with 
ontologies, and data access and import from different data sources. The course 
is hands-on and is taught by members of the Protege team.

Read more about it at:
http://protege.stanford.edu/shortcourse/201603/

If you have any questions about the Protege Short Course, please email:
protege-shortcou...@lists.stanford.edu

Please feel free to forward this announcement to anyone who might be interested 
in the course. Thank you!

We look forward to seeing you next Spring!

Best regards,
The Protege Team



CfP: WWW2016 workshop on Linked Data on the Web (LDOW2016)

2015-12-17 Thread Sören Auer
Hi all,

In case you don't yet know what to do during your X-Mas holidays, why not
prepare a submission for the WWW2016 workshop on Linked Data on the
Web (LDOW2016) in Montreal, Canada ;-) The paper submission deadline for
the workshop is 24 January, 2016. Please find the call for papers below.

BTW: LDOW now also accepts HTML5+RDFa submissions according to the
Linked Research principles: https://github.com/csarven/linked-research
with embedded semantic and interactive content.

Looking forward to seeing you at LDOW2016 in Montreal!

Cheers,

Sören, Chris, Tim, and Tom




  Call for Papers: 9th Workshop on Linked Data on the Web (LDOW2016)


 Co-located with 25th International World Wide Web Conference
April 11 to 15, 2016 in Montreal, Canada


   http://events.linkeddata.org/ldow2016/



The Web is developing from a medium for publishing textual documents
into a medium for sharing structured data. This trend is fueled on the
one hand by the adoption of the Linked Data principles by a growing
number of data providers. On the other hand, large numbers of websites
have started to semantically mark up the content of their HTML pages and
thus also contribute to the wealth of structured data available on the Web.

The 9th Workshop on Linked Data on the Web (LDOW2016) aims to stimulate
discussion and further research into the challenges of publishing,
consuming, and integrating structured data from the Web as well as
mining knowledge from the global Web of Data. The special focus of
this year’s LDOW workshop will be Web Data Quality Assessment and Web
Data Cleansing.


*Important Dates*

* Submission deadline: 24 January, 2016 (23:59 Pacific Time)
* Notification of acceptance: 10 February, 2016
* Camera-ready versions of accepted papers: 1 March, 2016
* Workshop date: 11-13 April, 2016


*Topics of Interest*

Topics of interest for the workshop include, but are not limited to, the
following:

Web Data Quality Assessment
* methods for evaluating the quality and trustworthiness of web data
* tracking the provenance of web data
* profiling and change tracking of web data sources
* cost and benefits of web data quality assessment
* web data quality assessment benchmarks

Web Data Cleansing
* methods for cleansing web data
* data fusion and truth discovery
* conflict resolution using semantic knowledge
* human-in-the-loop and crowdsourcing for data cleansing
* cost and benefits of web data cleansing
* web data quality cleansing benchmarks

Integrating Web Data from Large Numbers of Data Sources
* linking algorithms and heuristics, identity resolution
* schema matching and clustering
* evaluation of linking and schema matching methods

Mining the Web of Data
* large-scale derivation of implicit knowledge from the Web of Data
* using the Web of Data as background knowledge in data mining
* techniques and methodologies for Linked Data mining and analytics

Linked Data Applications
* application showcases including Web data browsers and search engines
* marketplaces, aggregators and indexes for Web Data
* security, access control, and licensing issues of Linked Data
* role of Linked Data within enterprise applications (e.g. ERP, SCM, CRM)
* Linked Data applications for life-sciences, digital humanities, social
sciences etc.


*Submissions*

We seek two kinds of submissions:

  1. Full scientific papers: up to 10 pages in ACM format
  2. Short scientific and position papers: up to 5 pages in ACM format

Submissions must be formatted using the ACM SIG template available at
http://www.acm.org/sigs/publications/proceedings-templates or in HTML5
e.g. according to the Linked Research
(https://github.com/csarven/linked-research) principles.

To author a submission according to the Linked Research principles,
authors can use dokieli (https://github.com/linkeddata/dokieli), a
decentralized authoring and annotation tool. HTML5 papers can be
submitted either by providing a URL to the paper (in HTML+RDFa, CSS,
JavaScript etc.) with supporting files, or as a zip archive including
all the material.

Accepted papers will be presented at the workshop and included in the
CEUR workshop proceedings. At least one author of each paper has to
register for the workshop and to present the paper.


*Organizing Committee*

 Christian Bizer, University of Mannheim, Germany
 Tom Heath, Open Data Institute, UK
 Sören Auer, University of Bonn and Fraunhofer IAIS, Germany
 Tim Berners-Lee, W3C/MIT, USA


*Contact Information*

For further information about the workshop, please contact the workshops
chairs at:  ldow2...@events.linkeddata.org


-- 
Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de/SoerenAuer

Fraunhofer-Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de/Auer.html

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer