In general, policies for notability in Wikidata will be governed by the
community of (all) Wikidata editors. On the technical side, we aim to
achieve two things:
* The system should be able to handle a lot of data.
* The interfaces and data access features should minimize the negative impact
On 04/04/12 23:23, Gregor Hagedorn wrote:
Wikidata can (and probably will) store information about each moon of
Uranus, e.g., its mass. It probably does not make sense to store the mass of
Moons of Uranus if there is such an article. It does not help to know that
the article Moons of Uranus also
Hi Andreas,
thanks for the input. I have drafted the current text about geo-related
datatypes, but I am far from being an expert in this area. Our mapping
expert in Wikidata is Katie (Aude), who has also been working with
OpenStreetMap, but further expert input on this topic would be quite
Martynas,
what you are proposing below is not W3C-recommended RDF but an extension
of triples to quads. As far as I know, this extension is not yet compatible
with existing standards such as SPARQL and OWL. Named graphs work
with SPARQL, but are mostly used in a different way than you suggest.
On 14/04/12 15:38, Gerard Meijssen wrote:
Hoi,
The Wikidata project is probably the software used by OmegaWiki, the
original Wikidata.
Ah, great, this completes the confusion :-D
Cheers,
Markus
On 14 April 2012 16:12, Jeroen De Dauw jeroended...@gmail.com
On 12/04/12 21:10, Daniel Kinzler wrote:
This is an interesting criticism, and there's an excellent retort by Denny in
the comments. Just fyi.
Thanks, very good discussion and very good answer by Denny. I should
have a chat with Mark at some point to check out what he thinks about it
(it is
Hi,
I am happy to report that an initial, yet fully functional RDF export
for Wikidata is now available. The exports can be created using the
wda-export-data.py script of the wda toolkit [1]. This script downloads
recent Wikidata database dumps and processes them to create RDF/Turtle
files.
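For anyone who wants to inspect the resulting files programmatically, here
is a minimal sketch (assumptions: rdflib is available, and the file name is
a hypothetical stand-in for one of the generated exports):

# Minimal sketch: load a generated Turtle file and peek at its triples.
# Assumptions: rdflib is installed; the file name is hypothetical.
from rdflib import Graph

g = Graph()
g.parse("wikidata-export.ttl", format="turtle")
print(f"Parsed {len(g)} triples")
for subj, pred, obj in list(g)[:10]:  # show the first few triples
    print(subj, pred, obj)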
On 04/08/13 13:17, Federico Leva (Nemo) wrote:
Markus Krötzsch, 04/08/2013 12:32:
* Wikidata uses be-x-old as a code, but MediaWiki messages for this
language seem to use be-tarask as a language code. So there must be a
mapping somewhere. Where?
Where I linked it.
Are you sure? The file you
or traditional?]).
I invite any language experts to look at the file and add
comments/improvements. Some of the issues should possibly also be
considered on the implementation side: we don't want two distinct codes
for the same thing.
Cheers,
Markus
On 04/08/13 16:35, Markus Krötzsch
to a certain dialect.
See also: http://meta.wikimedia.org/wiki/Special_language_codes
Greetings -- Purodha
*Sent:* Sunday, 4 August 2013 at 19:01
*From:* Markus Krötzsch mar...@semantic-mediawiki.org
*To:* Federico Leva (Nemo) nemow...@gmail.com
*Cc:* Discussion list for the Wikidata project
Dear Adam,
thanks for the pointer. The paper gives an overview of how to design a
wiki-based data curation platform for a specific target community. Some
of the insights could also apply to Wikidata, while others won't transfer
(e.g., you cannot invite the Wikidata community for a
Hi Mingli,
thanks, this is very interesting, but I think I need a bit more context to
understand what you are doing and why.
Is your goal to create a library for accessing Wikidata from Clojure
(like a Clojure API for Wikidata)? Or is your goal to use logical
inference over Wikidata and you
On 07/08/13 15:40, Mingli Yuan wrote:
Also, something similar to Magnus' Wiri; here is a bot developed by us
on Sina Weibo (a Twitter-like microblogging provider in China):
* http://weibo.com/n/%E6%9E%9C%E5%A3%B3%E5%A8%98
We use a dataset from Wikidata with some dirty hacks. It is only a
full triples instead. This would give you a line-by-line export
in (almost) no time (some uses of [...] blocks in object positions would
remain, but maybe you could live with that).
Best wishes,
Markus
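To illustrate why a line-by-line (N-Triples-style) export is attractive:
each line can be processed independently, so a streaming pass over a
compressed dump needs almost no memory. A sketch, assuming a gzipped file
with one triple per line (the file name is hypothetical):

# Sketch: stream over a line-based triples export with constant memory.
# Assumptions: gzipped input, one triple per line; file name hypothetical.
import gzip

count = 0
with gzip.open("wikidata-statements.nt.gz", "rt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        count += 1  # each remaining line holds one complete triple
print(count, "triples")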
All the best,
Sebastian
On 03.08.2013 23:22, Markus Krötzsch wrote:
Update: the first
On 10/08/13 10:29, Byrial Jensen wrote:
...
(BTW, the time values seem to be OK again, after many syntax errors in
the beginning. But the coordinate values have some strange (probably
erroneous?) variations: Values where the precision and/or globe is given
as null, and values where the globe
Good morning. I just found a bug that was caused by a bug in the
Wikidata dumps (a value that should be a URI was not). This led to a few
dozen lines with illegal qnames of the form w: . The updated script
fixes this.
Cheers,
Markus
On 09/08/13 18:15, Markus Krötzsch wrote:
Hi Sebastian
) or what? That's what puzzles me. I know that a Wikipedia
can allow multiple languages (or dialects) to coexist, but in the
Wikidata language selector I thought you could only select real
languages, not language groups.
Markus
On 8/6/13, Markus Krötzsch mar...@semantic-mediawiki.org wrote:
Hi
, Markus Krötzsch mar...@semantic-mediawiki.org
wrote:
On 11/08/13 22:29, Tom Morris wrote:
On Sat, Aug 10, 2013 at 2:30 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Anyway, if you restrict yourself to tools that are installed by
default
On 15/08/13 19:33, Jona Christopher Sahnwaldt wrote:
http://www.wikidata.org/entity/Q215607.nt which redirects to
http://www.wikidata.org/wiki/Special:EntityData/Q215607.nt
The RDF stuff at Wikidata is in flux. The RDF you get probably won't
contain all the data that the HTML page shows, and
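As an aside, fetching one of these per-entity exports is a one-liner; a
sketch using requests (an assumption), with the URL taken from the message
above -- redirects are followed automatically:

# Sketch: fetch the N-Triples serialization of one entity.
# The URL is from the message above; requests is an assumption.
import requests

url = "http://www.wikidata.org/entity/Q215607.nt"  # redirects to Special:EntityData
resp = requests.get(url)
resp.raise_for_status()
print(resp.text[:500])  # first few hundred characters of N-Triples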
On 15/08/13 21:38, Dan Brickley wrote:
...
FWIW there's also RDF/XML if you use a *.rdf suffix. This btw is of
great interest to us over in the schema.org project;
earlier today I was showing
http://www.wikidata.org/wiki/Special:EntityData/Q199154.rdf
Hi all,
I think one source of confusion here is the overlapping names of
property datatypes and datavalue types. Basically, the mapping is as
follows right now:
[Format: property type = datavalue type occurring in current dumps]
'wikibase-item' = 'wikibase-entityid'
'string' = 'string'
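Expressed as a lookup table (a sketch; only the two pairs quoted above are
taken from the message, so the table is deliberately incomplete):

# The mapping quoted above as a lookup table. Sketch only: the excerpt
# shows just these two pairs; the full mapping covers more property types.
PROPERTY_TYPE_TO_DATAVALUE_TYPE = {
    "wikibase-item": "wikibase-entityid",
    "string": "string",
}

def datavalue_type(property_type: str) -> str:
    """Return the datavalue type that appears in current dumps."""
    return PROPERTY_TYPE_TO_DATAVALUE_TYPE[property_type]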
in this particular case. Maybe we can fix this
somehow in the future when URIs are supported as a value datatype.
Markus
On Thu, Aug 22, 2013 at 11:33 AM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Hi all,
I think one source of confusion here
, Markus Krötzsch wrote:
If we have an IRI DV, considering that URLs are special IRIs, it seems
clear
that IRI would be the best way of storing them.
The best way of storing them really depends on the storage platform. It
may be a string or something else.
I think the real issue here is that we
Dear Wikidatanions (*),
I have just drafted a little proposal for creating more tools for
external people to work with Wikidata, especially to build services on
top of its data [1]. Your feedback and support is needed.
Idea: Currently, this is quite hard for people, since we only have WDA
Hi -- or better: Heya! -- Lydia:
Congratulations on your new role! This is great news for the project,
which allows Wikidata to proceed on its important mission in perfect
continuity. Denny has made huge contributions to the project in the past
1.5 years -- a task that often involved
longer than I
need. My main problem is sexing Asian authors. Not sure if name-based
approaches are promising there at all.
Markus
On Sun, Oct 13, 2013 at 11:16 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Hi all,
I'd like to share
nicely how to take the effect of time into account.
Markus
On Sun, Oct 13, 2013 at 6:16 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Hi all,
I'd like to share a little Wikidata application: I just used
Wikidata to guess the sex of people
as a suggestion, you can turn these kinds of numbers into a probability
distribution using the beta distribution. If you use (1,1) as a prior, you
get something like beta(251,1) as the distribution of the probability
that somebody named Aaron is male.
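A concrete sketch of that suggestion (scipy is an assumption; the counts
are the hypothetical 250 male / 0 female observations behind beta(251,1)):

# Sketch of the beta-distribution suggestion above.
# Assumptions: scipy is available; counts are hypothetical (250 male, 0 female).
from scipy.stats import beta

prior_alpha, prior_beta = 1, 1            # uniform Beta(1,1) prior
males, females = 250, 0                   # observed counts for the name "Aaron"
posterior = beta(prior_alpha + males, prior_beta + females)  # Beta(251, 1)

print(posterior.mean())                   # posterior mean, about 0.996
print(posterior.interval(0.95))           # 95% credible interval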
-Original Message-
From: Markus Krötzsch
Sent
Hi Antoine,
The main answer to your questions is that the data model of Wikidata
defines a *data structure*, not the *informal meaning* that this data
structure has in an application context (that is: what we, humans, want
to say when we enter it). I try to explain this a bit better below.
On 12/11/13 16:26, Sven Manguard wrote:
Google would not have sent over a large chunk of cash to help get
Wikidata started if it didn't think it could use Wikidata. That Russian
search engine company would not have sent over a large chunk of cash to
keep Wikidata going if it didn't think it could
Hi,
On a related note, there is also an upcoming project, Wikidata Toolkit
[1], that will look into implementing query functionality over Wikidata
content, not to replace the Wikidata query features but to provide
functionality that is not a top priority for the core development. The
first
On 10/01/14 03:21, emw wrote:
What about monthly/dump-based aggregated property usage statistics?
Property usage statistics would be very valuable, Dimitris. It would
help inform community decisions about how to steer changes in property
usage with less disruption. It would have other
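Such dump-based statistics could be computed with a single streaming pass;
a sketch assuming the one-entity-per-line layout of the JSON dumps (the
file name is hypothetical, and only main statements are counted):

# Sketch: aggregate property usage from a JSON dump in one streaming pass.
# Assumptions: one entity per line inside the dump's JSON array;
# the file name is hypothetical; only main statements are counted.
import gzip
import json
from collections import Counter

usage = Counter()
with gzip.open("wikidata-dump.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        line = line.strip().rstrip(",")
        if not line or line in ("[", "]"):
            continue  # skip the enclosing array brackets
        entity = json.loads(line)
        for prop, statements in entity.get("claims", {}).items():
            usage[prop] += len(statements)

for prop, count in usage.most_common(20):
    print(prop, count)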
This call is a scam. The conference is not a legitimate academic event but
aims at making money. It is a sad truth that there is an increasing amount
of (more or less) academic conference spam these days. IEEE
has been criticized for sponsoring events without sufficient quality
control [1],
Hi,
On 26/02/14 22:40, Michael Smethurst wrote:
Hello
*Really* not meaning to jump down any httpRange-14 rabbit holes, but
wasn't there a plan for Wikidata to have URIs representing things and
pages about those things?
From conversations on this list I sketched a picture a while back of all
Hi ValterVB,
On 04/03/14 20:17, ValterVB wrote:
Hi Markus, it's an error of my bot (ValterVBot). Thanks for noting it. I
can probably fix it on Friday or Saturday; the source should be Q11920, not
Q11329. Sorry for this problem.
Great, that should be fine.
ValterVB
PS I’m not sure if I reply to
Hi,
For a few weeks now, no daily dumps have been published for Wikidata.
Only empty directories are created every day. I could not find a related
email on any list I scanned. Can anybody clarify what the situation is now?
Cheers,
Markus
On 13/03/14 17:14, Katie Filbert wrote:
On Thu, Mar 13, 2014 at 5:06 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Hi,
For a few weeks now, no daily dumps have been published for
Wikidata. Only empty directories are created every
Dear all,
There are quite a few Wikidata-related submissions to Wikimania [0]. The
selection of the program committee seems to be based on user votes to
some extent, so don't forget to add your name to the submission pages
you care about :-).
I just added another two:
* How to use Wikidata:
Hi Gerard.
On 09/04/14 10:54, Gerard Meijssen wrote:
Hoi,
What is the relevance of these tools when you have to have specialised
environments to use them?
Not sure what you mean. Wikidata Toolkit doesn't have any requirements
other than plain old Java to run.
Nevertheless, we'd also like
Hi Eric,
Thanks for all the information. This was very helpful. I am only getting
around to answering now since we have been quite busy building RDF exports for
Wikidata (and writing a paper about it). I will soon announce this here
(we still need to fix a few details).
You were asking about using these
without the added maintenance cost on the data management level.
Cheers,
Markus
On Wed, May 14, 2014 at 2:33 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
I guess there is already a group of people who deal w
Hi Eric
Hi David,
Interesting remark. Let's explore this idea a bit. I will give you two
main reasons why we have properties separate, one practical and one
conceptual.
First the practical point. Certainly, everything that is used as a
property needs to have a datatype, since otherwise the wiki
On 28/05/14 10:37, Daniel Kinzler wrote:
Key differences between Properties and Items:
* Properties have a data type, items don't.
* Items have sitelinks, Properties don't.
* Items have Statements, Properties will support Claims (without sources).
The software needs these
there is no separation).
many thanks for your detailed answer, and sorry if I'm bringing up
already discussed topics. It is just that when you stare long into
Wikidata, Wikidata stares back into you ;)
Cheers,
Micru
On Wed, May 28, 2014 at 11:39 AM, Markus Krötzsch
mar...@semantic-mediawiki.org
On 28/05/14 15:56, Daniel Kinzler wrote:
On 28.05.2014 15:05, Jean-Baptiste Pressac wrote:
Hello,
I am reading the documentation of Wikidata, where I learned that new properties
can be suggested for discussion. But this means adding new properties to
Wikidata. However, is it possible to use
On Wed, May 28, 2014 at 2:48 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
David,
Regarding the question of how to classify properties and how to
relate them to items:
* same as (in the sense of owl:sameAs) is not the right concept
David,
One of the uses is: what is the relationship between a
human and his behavior?
This is an easy question once you have been clear about what human
behaviour is. According to enwiki, it is a range of behaviours
*exhibited by* humans. The bigger question for me is whether it is
useful
dacu...@gmail.com:
Markus,
On Thu, May 29, 2014 at 12:53 AM, Markus Krötzsch
mar...@semantic-mediawiki.org wrote:
This is an easy question once you have been clear about what
human behaviour is. According
-- these are not the things
we normally have in Wikidata).
Markus
2014-05-29 13:43 GMT+02:00 Markus Krötzsch
mar...@semantic-mediawiki.org:
On 29/05/14 12:41, Thomas Douillard wrote:
@David:
I think you should have a look at fuzzy logic
David,
I need to answer to your first assertion separately:
On 29/05/14 01:48, David Cuenca wrote:
Well, our goal is to gather the whole of human knowledge, not to use it.
No, that is really not the case. Our goal is to gather carefully
selected parts of human knowledge. Our community
The other answers, under the original subject:
On 29/05/14 01:48, David Cuenca wrote:
Settled :) Let's leave it at defined as a trait of
I don't think it is very clear what the intention of this property is.
What are the limits of its use? What is it meant to do? Can behaviour
really be a
On 29/05/14 21:04, Andrew Gray wrote:
One other issue to bear in mind: it's *simple* to have properties as a
separate thing. I have been following this discussion with some
interest but... well, I don't think I'm particularly stupid, but most
of it is completely above my head.
Saying here are
On 07/06/14 00:40, Joe Filceolaire wrote:
Well they can ask.
As there is no real definition of what a city is and what the limits of
each city are, I'm not sure they will get a useful answer. The population
of the City of London (Q23311), for instance, is only 7,375! Should we
change it from
Dear all,
I am happy to announce the second release of Wikidata Toolkit [1], the
Java library for programming with Wikidata and Wikibase. This release
fixes bugs and improves features of the first release (download, parse,
process Wikidata exports) and it adds new components for serializing
status in
the UK so one would need to have helper items there as well. If we need
new items in either case, the class-based modelling seems nicer since it
fits into the existing class hierarchy as you suggest.
Markus
L.
On 10 Jun 2014 10:21, Markus Krötzsch mar...@semantic-mediawiki.org
On 10/06/14 22:50, Gerard Meijssen wrote:
Hoi,
It is stated that there are no qualifiers included. In one of the
articles you write that it is to be understood that the validity of the
information depends on the existing qualifiers.
What is the value of these RDF exports with the
On 11/06/14 17:13, Derric Atzrott wrote:
You might also find the new property browser helpful:
http://tools.wmflabs.org/wikidata-exports/miga/
(as mentioned before, requires one of Google Chrome, Safari, Opera, or
Android Browser to work).
While an excellent list and a neat tool, it sadly
]
http://opendata.stackexchange.com/questions/107/when-will-the-wikidata-database-be-available-for-download/
Max Klein
‽ http://notconfusing.com/
On Tue, Jun 10, 2014 at 1:35 AM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Dear all,
I am happy
On 11/06/14 20:49, Bene* wrote:
On 11.06.2014 17:27, Markus Krötzsch wrote:
Yes, I know what you mean. I'd love to integrate property group
information into our view as well, but I don't know where to get this
information from (other than by scraping it from the wiki page, which
does not seem
/06/14 14:36, Markus Krötzsch wrote:
Hi all,
We have prepared a new browser for Wikidata Properties:
http://tools.wmflabs.org/wikidata-exports/miga/
It is based on the Miga data browser [1]. This means it only works in Google
Chrome/Chromium, Opera, Safari, and the Android Browser
[Including Yaron, the Miga developer, who is not on this list yet]
On 12/06/14 17:21, Thomas Douillard wrote:
Hi Markus, first thanks a lot for these tools.
It would be cool to include a link to the property browser in some
template (Template:P', for example, as Template:Q' generates a
Hi Gerard,
On 13/06/14 11:08, Gerard Meijssen wrote:
Hoi,
When you leave out qualifiers, you will find that Ronald Reagan was
never president of the United States and only an actor. Yes, omitting
the statements with qualifiers is wrong, but as a consequence the totality
of the information is wrong
11:41, Markus Krötzsch mar...@semantic-mediawiki.org wrote:
Hi Gerard,
On 13/06/14 11:08, Gerard Meijssen wrote:
Hoi,
When you leave out qualifiers, you will find that Ronald Reagan was
never president of the United States
On 13/06/14 15:52, Bene* wrote:
...
Did I understand you right, Markus, that you leave out all statements
which have at least one qualifier? Wouldn't it make more sense to leave
out the qualifiers only but add the statements without qualifiers
anyway? Because this would solve e.g. Gerard's
Gerard,
You sometimes sound as if everything is lost just because somebody put
an RDF file on the Web ;-)
If you don't like the simplified export, why don't you just use our main
export which contains all the data? Can't we all be happy -- the people
who want simple and the people who want
Eric,
Two general remarks first:
(1) Protege is for small and medium ontologies, but not really for such
large datasets. To get SPARQL support for the whole data, you could
install Virtuoso. It also comes with a simple Web query UI. Virtuoso
does not do much reasoning, but you can use
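Once a Virtuoso endpoint is loaded with the exports, querying it from a
script could look like this (a sketch; the endpoint URL is Virtuoso's
default, and SPARQLWrapper is an assumption):

# Sketch: query a local Virtuoso endpoint loaded with the RDF exports.
# Assumptions: Virtuoso's default endpoint URL; SPARQLWrapper is installed.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])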
FYI: this project claims to use Wikidata (among other resources) for
multilingual word-sense disambiguation. One of the first third-party
uses of Wikidata that I am aware of (but other pointers are welcome if
you have them). Wiktionary and OmegaWiki are also mentioned here.
Cheers,
Markus
On 01/07/14 21:47, Lydia Pintscher wrote:
On Tue, Jul 1, 2014 at 9:44 PM, Andy Mabbett a...@pigsonthewing.org.uk wrote:
On 1 July 2014 20:20, Lydia Pintscher lydia.pintsc...@wikimedia.de wrote:
We have just deployed the entity suggester. This helps you with
suggesting properties. So when you
On 01/07/14 22:14, Markus Krötzsch wrote:
...
(2) Grade I listed building
http://tools.wmflabs.org/wikidata-exports/miga/?classes#_cat=Classes/Id=Q15700818
Related properties: English Heritage list number, masts, Minor Planet
Center observatory code, home port, coordinate location, OS grid
On 01/07/14 22:43, Bene* wrote:
On 01.07.2014 22:23, Markus Krötzsch wrote:
P.S. One weakness of my algorithm you can already see: it has trouble
estimating the relevance of very rare properties, such as Minor
Planet Center observatory code above. A single wrong annotation may
then lead
On 02/07/14 16:29, David Cuenca wrote:
On Tue, Jul 1, 2014 at 11:07 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
My hope is that with my other suggestion (using P31 values as
features to correlate with), the property suggester
big thumb sizes,
but when the file is a JPG, don't try to generate a thumb bigger than the
original file or you will get a beautiful error.
Regards
2014-07-02 22:33 GMT+02:00 Markus Krötzsch
mar...@semantic-mediawiki.org:
Dear
On 04/07/14 14:49, Magnus Manske wrote:
On Fri, Jul 4, 2014 at 1:40 PM, Scott MacLeod
worlduniversityandsch...@gmail.com
mailto:worlduniversityandsch...@gmail.com wrote:
Jane, Lydia and WikiDatans,
These are great and helpful developments, which seem to be quite far
along now.
On 25/07/14 15:28, David Cuenca wrote:
Worried about the harshness that has lately been developing, I have
started a new initiative to counter that by promoting more dialogue,
civility, and friendliness
https://www.wikidata.org/wiki/Wikidata:Shelter
When you are with friends you don't need to
On 12.08.2014 23:49, Andre Engels wrote:
...
In my opinion, manual descriptions should be kept rather than
deleted completely. Instead, automatically generated descriptions should be
provided when there is no manual description, and people should be asked to
override automatically
On 19.08.2014 12:20, Gerard Meijssen wrote:
Hoi,
I cannot parse this ..
What Thomas is saying is that classification (putting things into
categories) and querying (finding things based on certain properties)
can be combined in a natural way. In ontology languages like OWL, you
can make
I guess (many categories could be expressed by
queries to improve results; a gentle, community-led transition will be
possible and preferred; categories won't be switched off just because
Wikidata is switched on).
Cheers,
Markus
Cheers,
Micru
On Tue, Aug 19, 2014 at 12:30 PM, Markus
On 19.08.2014 22:23, David Cuenca wrote:
...
Actually I have one last question :) At the moment Gerard is using 'is
a list of: value' on category item pages, which has the effect of being
the inverse of 'instance of'. And then he adds further conditions as
qualifiers, see:
On 19.08.2014 16:13, Lydia Pintscher wrote:
On Tue, Aug 19, 2014 at 11:19 AM, David Cuenca dacu...@gmail.com wrote:
Thanks for the stats, Gerard. Two thoughts:
- With so many items without description, I wonder why we don't have the
automatic descriptions gadget enabled by default.
I am a bit
Hi all,
We have a lot of statements saying that something is an instance of a
Wikipedia disambiguation page (Q4167410). Unfortunately, this kind of
information says something about a particular Wikipedia article in a
particular language, and often is not true for other languages.
Moreover,
On 22.08.2014 09:10, Lydia Pintscher wrote:
On Fri, Aug 22, 2014 at 3:35 AM, Legoktm legoktm.wikipe...@gmail.com wrote:
* I feel uncomfortable linking to Facebook/Twitter/etc. on the main page.
Fair enough. How about we make
https://www.wikidata.org/wiki/Wikidata:Social_media prettier and
Hi,
I just updated the data for the Wikidata classes and properties browser
[1] -- was about time -- and added some improvements on the way:
(1) Classes and properties are now always ordered by usage (most used
first), which was not possible to do before. Examples:
** properties related
On 08.09.2014 14:27, Jeroen De Dauw wrote:
Hey,
\o/
Where are the source code and issue tracker for this? It would probably be
good if those were linked from the tool.
True, but it's not quite in our master branch yet: the code is part of
the extended WDTK examples module, see
On 08.09.2014 14:53, Markus Krötzsch wrote:
...
http://tools.wmflabs.org/wikidata-exports/miga/#_item=1204
That first shows population. When you then click on the link, you see
the data type is quantity, not string.
Yes, I think this is a bug in how we use IRIs and labels for datatypes
function in the code
that I tweaked to adjust this until it seemed right, but there is no
deeper principle behind this.
Very cool...
Thanks :-)
Markus
-Ben
On Mon, Sep 8, 2014 at 9:24 AM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote
Hi all,
I'd like to share a little tool with you that has been created at a
recent hackathon. Not my work but a nice idea that might inspire others:
AnnoT is a manual text annotation tool where you can, while typing
text into an HTML form, select Wikidata items for some of the words. You
On 09.09.2014 11:33, Daniel Kinzler wrote:
On 09.09.2014 01:40, Denny Vrandečić wrote:
Create a third item in Wikidata, and use that for the language links. Any
Wikipedia that has two separate articles can link to the separate items, any
Wikipedia that has only one article can link to the
On 09.09.2014 11:47, Thomas Douillard wrote:
The composite item seems to be a sort of composite geographical/human
system, like an ecosystem (community of living organisms together with
the nonliving components of their environment)
https://www.wikidata.org/wiki/Q37813 a special kind of
On 13.09.2014 21:25, Jeremy Baron wrote:
On Sat, Sep 13, 2014 at 7:23 PM, Denny Vrandečić vrande...@gmail.com wrote:
I am not a lawyer, but if I remember correctly, copyright covers expression,
not content. Since the Wikidata data model and its representation in JSON is
rather unique, an ISBN
Hi,
I fully agree with Thomas and the other replies given here. Let me give
some other views on these topics (partly overlapping with what was said
before). It's important to understand these things to get the subclass
of/instance of thing right -- and it would be extremely useful if we
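The distinction at stake can be shown with two statements (P31 'instance
of' and P279 'subclass of' are the actual Wikidata property IDs; the
concrete statements below are illustrative examples, not verified content):

# Illustration of the instance of / subclass of distinction.
# P31 and P279 are the actual Wikidata property IDs; the concrete
# statements are illustrative examples, not verified Wikidata content.
statements = [
    ("Q42", "P31", "Q5"),       # Douglas Adams --instance of--> human
    ("Q5", "P279", "Q215627"),  # human --subclass of--> person
]
# An individual is an *instance* of a class; a class is a *subclass* of a
# broader class. Mixing the two breaks traversal of the class hierarchy.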
Dear all:
Those of you active in research may be interested in submitting to a
recently announced special issue of the Journal of Web Semantics that
explicitly refers to Wikidata in its call:
JWS Special Issue on Knowledge Graphs
Hi Cristian,
As Daniel said, the live export is currently somewhat limited. However,
we provide RDF dumps that contain all the data:
http://tools.wmflabs.org/wikidata-exports/rdf/
This shows how the final live exports should also look (more or less),
and it could be a blueprint for somebody
Martynas,
Denny is right. You could set up a Virtuoso endpoint based on our RDF
exports. This would be quite nice to have. That's one important reason
why we created the exports, and I really hope we will soon see this
happening. We are dealing here with a very large project, and the
curious if there is any formal collaboration
(in-place|proposed|possible) between DBpedia and Wikidata?
Phil
This message optimized for indexing by NSA PRISM
On Wed, Oct 29, 2014 at 2:34 PM, Markus Krötzsch
mar...@semantic-mediawiki.org wrote:
Martynas,
Denny is right. You could set up
On 30.10.2014 11:49, Cristian Consonni wrote:
2014-10-29 22:59 GMT+01:00 Lydia Pintscher lydia.pintsc...@wikimedia.de:
Help with this would be awesome and totally welcome. The tracking bug
is at https://bugzilla.wikimedia.org/show_bug.cgi?id=48143
Speaking of totally awesome (ahem :D):
* see:
Hi Cristian,
Awesome :-) Small note: I just got a Bad Gateway when trying
http://data.wikidataldf.com/ but it now seems to work.
It also seems that some of your post answers the question from my
previous email. That sounds as if it is pretty hard to create HDT
exports (not much surprise
On 31.10.2014 14:51, Cristian Consonni wrote:
2014-10-30 22:40 GMT+01:00 Cristian Consonni kikkocrist...@gmail.com:
Ok, now I have managed to add the Wikidata statements dump too.
And I have added a wikidata.hdt combined dump of all of the above.
Nice. We are running the RDF generation on
On 04.11.2014 18:18, Cristian Consonni wrote:
Hi Markus,
2014-11-01 0:29 GMT+01:00 Markus Krötzsch mar...@semantic-mediawiki.org:
Nice. We are running the RDF generation on a shared cloud environment and I
am not sure we can really use a lot of RAM there. Do you have any guess how
much RAM you
Also, as this seems to be taking longer than expected, I have now also
re-published the Jan 12 and Jan 5 JSON dumps on Labs for your
convenience:
http://tools.wmflabs.org/wikidata-exports/tmp/
Users of Wikidata Toolkit can manually download the file
20150112.json.gz to a subdirectory
Dear Wikidata JSON export team,
There seems to be a syntax error in the 20150112 JSON file that (I
think) has already been there in the previous dump. So I guess it makes
sense to report it.
In line 9374899, around column 2648 of the 20150112 JSON dump, we find
snaks:[]
Of course, {} would
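Note that snaks:[] is well-formed JSON of the wrong type (an array where an
object is expected), so a plain parser won't flag it; a scan like the
following sketch would (assumes the one-entity-per-line layout of the JSON
dumps; the file name is hypothetical):

# Sketch: find statements whose "snaks" field is a list instead of an object.
# Assumptions: one entity per line in the dump's array; file name hypothetical.
import gzip
import json

def has_bad_snaks(obj):
    """Recursively check whether any 'snaks' value is a list, not a dict."""
    if isinstance(obj, dict):
        return any(
            (key == "snaks" and isinstance(value, list)) or has_bad_snaks(value)
            for key, value in obj.items()
        )
    if isinstance(obj, list):
        return any(has_bad_snaks(item) for item in obj)
    return False

with gzip.open("20150112.json.gz", "rt", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        line = line.strip().rstrip(",")
        if not line or line in ("[", "]"):
            continue
        if has_bad_snaks(json.loads(line)):
            print("problematic entity on line", lineno)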