Looks like someone hasn't learned the lesson:
https://www.mail-archive.com/wikidata-l@lists.wikimedia.org/msg02588.html
On Thu, Feb 26, 2015 at 9:27 PM, Lukas Benedix
lukas.bene...@fu-berlin.de wrote:
I second this!
btw: what is the status of the missing full-history dumps?
on RDF. Let's ensure that what we have
works, works well and plan carefully for a better RDF but let's only have it
go in production AFTER we know that it works well.
Thanks,
GerardM
On 28 October 2014 02:46, Martynas Jusevičius marty...@graphity.org wrote:
Hey all,
so I see there is some work being done on mapping Wikidata data model
to RDF [1].
Just a thought: what
is first to happen within our projects, and THAT is not so much a
technical problem at all.
Thanks,
GerardM
On 28 October 2014 11:26, Martynas Jusevičius marty...@graphity.org wrote:
Gerard,
what about query functionality for example? This has been long
promised but shows no real
Hey all,
so I see there is some work being done on mapping Wikidata data model
to RDF [1].
Just a thought: what if you actually used RDF and Wikidata's concepts
modeled in it right from the start? And used standard RDF tools, APIs,
query language (SPARQL) instead of building the whole thing from
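For readers following this exchange, the core of a SPARQL query is a basic graph pattern: triple patterns containing variables, joined against a set of triples. A toy evaluator sketches the idea (purely illustrative — the entity and property IRIs are real-looking but hand-picked, and a real deployment would of course use a standard engine, which is the point being argued above):

```python
# Toy illustration of what a SPARQL engine does with a basic graph
# pattern (BGP): match triple patterns (with ?variables) against a
# triple set, joining bindings across patterns.

DATA = {
    ("wd:Q64", "rdfs:label", "Berlin"),
    ("wd:Q64", "wdt:P1082", "3500000"),
    ("wd:Q84", "rdfs:label", "London"),
}

def match(pattern, triple, bindings):
    """Try to unify one triple pattern with one concrete triple."""
    b = dict(bindings)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):            # variable: bind, or check old binding
            if b.setdefault(p, t) != t:
                return None
        elif p != t:                     # constant: must match exactly
            return None
    return b

def bgp(patterns, data, bindings=None):
    """Evaluate a basic graph pattern: join matches of all patterns."""
    if bindings is None:
        bindings = {}
    if not patterns:
        return [bindings]
    results = []
    for triple in data:
        b = match(patterns[0], triple, bindings)
        if b is not None:
            results.extend(bgp(patterns[1:], data, b))
    return results

# Roughly: SELECT ?city WHERE { ?s rdfs:label ?city . ?s wdt:P1082 ?pop }
rows = bgp([("?s", "rdfs:label", "?city"), ("?s", "wdt:P1082", "?pop")], DATA)
print([r["?city"] for r in rows])  # ['Berlin']
```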
Jan,
my suspicion is that my predictions from last year hold true: it is a
far more complex task to design a scalable and performant data model,
query language and/or query engine solely for Wikidata than the
designers of this project anticipated - unless they did anticipate and
now knowingly
Hey Lydia,
how about query access?
Martynas
graphityhq.com
On Wed, Nov 6, 2013 at 6:17 PM, Lydia Pintscher
lydia.pintsc...@wikimedia.de wrote:
Hey everyone,
Progress! We now have the long awaited new search backend up and
running for testing on Wikidata. It will still need some tweaking but
There was a long discussion not so long ago about using established
RDF tools for Wikipedia dumps instead of home-brewed ones, but I guess
someone hasn't learnt the lesson yet.
On Thu, Sep 26, 2013 at 2:22 PM, Kingsley Idehen kide...@openlinksw.com wrote:
All,
See:
Here's my approach to software code problems: we need less of it, not
more. We need to remove domain logic from source code and move it into
data, which can be managed and on which UI can be built.
In that way we can build generic, scalable software agents. That is the
way to the Semantic Web.
Yes, that is one of the reasons functional languages are getting popular:
https://www.fpcomplete.com/blog/2012/04/the-downfall-of-imperative-programming
With PHP and JavaScript being the most widespread (and still misused)
languages we will not get there soon, however.
On Mon, Jul 8, 2013 at
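The "less code, move domain logic into data" point above can be sketched in a few lines. This is an invented example, not something from the thread: the rules live in a declarative table, and one generic agent interprets it, so changing behaviour means editing data rather than source code.

```python
# Sketch: domain logic expressed as data rather than code branches.
# Invented example -- a validation table that a generic agent interprets.

# Declarative rules: property name -> expected value type.
SCHEMA = {
    "population": int,
    "label": str,
}

def validate(record):
    """Generic agent: check a record against the declarative schema."""
    return all(
        isinstance(record.get(prop), expected)
        for prop, expected in SCHEMA.items()
    )

print(validate({"population": 3500000, "label": "Berlin"}))  # True
print(validate({"population": "many", "label": "Berlin"}))   # False
```

Extending the system to a new property is then a one-line data change to `SCHEMA`, with no new branching logic.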
You probably mean Linked Data?
On Tue, Jun 11, 2013 at 9:41 PM, David Cuenca dacu...@gmail.com wrote:
While on the Hackathon I had the opportunity to talk with some people from
sister projects about how they view Wikidata and the relationship it should
have to sister projects. Probably you are
Hey wikidatians,
occasionally checking threads in this list like the current one, I get
a mixed feeling: on one hand, it is sad to see the efforts and
resources wasted as Wikidata tries to reinvent RDF, and now also
triplestore design as well as XSD datatypes. What's next, WikiQL
instead of
be very much
interested in that.
Cheers,
Denny
2012/12/19 Martynas Jusevičius marty...@graphity.org
Denny, the statement-level granularity you're describing is achieved by
RDF reification. You describe it, however, as a deprecated mechanism of
provenance, without backing that up.
Why do you think there must be a better mechanism? Maybe you should take
another look at reification, or lower your
options.
All the best,
Sebastian
[1]http://ceur-ws.org/Vol-699/Paper5.pdf
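For context on the mechanism being debated: RDF reification describes a statement by turning it into a resource of type rdf:Statement with rdf:subject, rdf:predicate and rdf:object links, so further triples (a source, a qualifier) can point at the statement itself. A minimal sketch, with triples as plain Python tuples and illustrative `ex:` identifiers (not drawn from the thread):

```python
# RDF reification sketch: to say something ABOUT a statement (e.g. its
# source), describe the statement itself as a resource. Triples are
# plain 3-tuples; identifiers are illustrative placeholders.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def reify(graph, subject, predicate, obj, statement_id):
    """Add a reified description of (subject, predicate, obj) to graph."""
    graph.update({
        (statement_id, RDF + "type", RDF + "Statement"),
        (statement_id, RDF + "subject", subject),
        (statement_id, RDF + "predicate", predicate),
        (statement_id, RDF + "object", obj),
    })
    return statement_id

graph = set()
# The base statement (illustrative values).
graph.add(("ex:Berlin", "ex:population", "3500000"))
# Reify it, then attach provenance at the statement level -- the
# granularity Wikidata needs for per-statement references.
stmt = reify(graph, "ex:Berlin", "ex:population", "3500000", "ex:stmt1")
graph.add((stmt, "ex:source", "ex:someCensus"))
```

Whether this pattern is adequate or too verbose for Wikidata's references and qualifiers is exactly what the messages above disagree about.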
for this? I always thought it was exactly the
opposite, i.e. SPARQL2SQL mappers performing better than native stores.
Cheers,
Sebastian
On 06/22/2012 08:43 PM, Martynas Jusevičius wrote:
It says deprecated on the Data model wiki.
So maybe Wikidata doesn't need statement-level granularity