Héllo,
I am very happy about this news.
I am a wiki newbie interested in using Wikidata to do text analysis.
I am trying to follow the discussion here and on the French Wiktionary.
I will take this as an opportunity to sum up some of the concerns
raised on the French Wiktionary [0]:
- How Wikidata and
Héllo,
I am developing a graph database using Python. The point is to
have an easy-to-set-up database that can handle bigger-than-RAM
datasets like Wikidata. Right now there is no such database
available except in the Java world via Neo4j embedded.
AjguDB [0] is a graphdb library I've
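For readers curious about the general technique, a graph layered over an ordered key space can be sketched like this in Python (an illustration of the idea, not AjguDB's actual API):

```python
# Minimal sketch: a property graph stored as tuples in an ordered
# key space, so lookups become prefix/range scans. This is the idea
# behind graph layers over ordered key-value stores.

class TupleGraph:
    def __init__(self):
        # A plain dict with sorted iteration stands in for an
        # ordered key-value store.
        self.store = {}

    def add_vertex(self, uid, **properties):
        for name, value in properties.items():
            self.store[("vertex", uid, name)] = value

    def add_edge(self, start, end, label):
        # Index both directions so outgoing and incoming edges
        # are both prefix scans.
        self.store[("edge-out", start, label, end)] = True
        self.store[("edge-in", end, label, start)] = True

    def outgoing(self, uid):
        # Prefix scan over ("edge-out", uid, ...).
        return [(key[2], key[3]) for key in sorted(self.store)
                if key[:2] == ("edge-out", uid)]

graph = TupleGraph()
graph.add_vertex("Q42", label="Douglas Adams")
graph.add_vertex("Q5", label="human")
graph.add_edge("Q42", "Q5", "instance-of")
print(graph.outgoing("Q42"))  # [('instance-of', 'Q5')]
```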
On 2016-12-31 11:57, Lydia Pintscher wrote:
Folks,
We're now officially mainstream ;-)
https://www.buzzfeed.com/katiehasty/song-ends-melody-lingers-in-2016?utm_term=.nszJxrKqR#.sknE4nVAg
Neat!
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
What? Wikidata doesn't track the license of each piece of information?!
--
Amirouche ~ amz3 ~ http://www.hyperdev.fr
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
Hello,
I am investigating, with several people over the rainbow in the GNU
project, as part of guix [0].
Our goal is to make our packages easier to discover by our users via
full-text search or structured queries.
Questions:
a) I see Arch and Debian have properties. What would it take to have a
On Thu, Jun 6, 2019 at 21:33, Guillaume Lederrey wrote:
> Hello all!
>
> There has been a number of concerns raised about the performance and
> scaling of Wikidata Query Service. We share those concerns and we are
> doing our best to address them. Here is some info about what is going
> on:
>
>
Hello all,
On Tue, Jun 4, 2019 at 15:46, Marielle Volz wrote:
> Yes, the api is at
> https://www.wikidata.org/w/api.php?action=query&list=search&srsearch=Bush
>
> There's a sandbox where you can play with the various options:
>
>
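The search endpoint mentioned above uses the standard MediaWiki API parameters; building the request URL can be sketched in Python (srlimit and format are optional extras):

```python
# Build a Wikidata full-text search URL using the MediaWiki API's
# query/search module (action=query, list=search, srsearch=...).
from urllib.parse import urlencode

def search_url(term, limit=10):
    params = {
        "action": "query",
        "list": "search",
        "srsearch": term,   # the search term
        "srlimit": limit,   # number of results to return
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

print(search_url("Bush"))
```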
On Sun, Jun 9, 2019 at 23:18, Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:
> I made a proposal for a grant at
> https://meta.wikimedia.org/wiki/Grants:Project/WDQS_On_FoundationDB
>
> Mind the fact that this is not about the versioned quadstore. It is about
Hello Sebastian and Stas,
On Wed, Jun 12, 2019 at 19:27, Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:
> Hello Sebastian,
>
> First thanks a lot for the reply. I started to believe that what I was
> saying was complete nonsense.
>
> On Wed, Jun 12, 2019 at 1
I created another draft proposal for a *prototype* to scale Wikidata,
using the tools I have been building, that goes beyond only scaling the
Wikidata Query Service. The first quarter should be reserved for WDQS.
As you might have seen, the first proposal
I made a proposal for a grant at
https://meta.wikimedia.org/wiki/Grants:Project/WDQS_On_FoundationDB
Mind the fact that this is not about the versioned quadstore. It is about a
simple triplestore; it is mainly missing bindings for FoundationDB and
SPARQL syntax.
Also, I will probably need help to
Hello Sebastian,
First thanks a lot for the reply. I started to believe that what I was
saying was complete nonsense.
On Wed, Jun 12, 2019 at 16:51, Sebastian Hellmann <
hellm...@informatik.uni-leipzig.de> wrote:
> Hi Amirouche,
> On 12.06.19 14:07, Amirouche Boubekki wro
On Wed, Jun 12, 2019 at 19:11, Stas Malyshev wrote:
> Hi!
>
> >> So there needs to be some smarter solution, one that we're unlikely to
> >> develop in-house
> >
> > Big cat, small fish. As Wikidata continues to grow, it will have specific
> > needs.
> > Needs that are unlikely to be solved by
> This is coming from a Wikidata lurker who just happens to like
> the Freebase approach to versioning of knowledge graphs, similar to what
> you have described.
>
> Josh
>
>
> On Sat, May 4, 2019 at 5:49 AM Amirouche Boubekki <
> amirouche.boube...@gmail.com> wrote:
>
GerardM's post triggered my interest to post to the mailing list. As you
might know, I am working on a functional quadstore, that is, a quadstore
that keeps old versions of the data around, like a wiki, but in a
directed acyclic graph. It only stores the differences between commits.
It relies on a snapshot of the latest
Right now, I am working on getting it all together.
https://github.com/awesome-data-distribution/datae/tree/master/docs/SCHEME20XY#abstract
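The diff-based versioning idea described above can be sketched in Python: each commit records only the quads it adds or removes relative to its parent, and a version is materialized by replaying the chain of commits (an illustration of the approach, not the actual datae code):

```python
# Sketch of "only store differences between commits": a commit holds
# the quads it adds and removes relative to its parent; materializing
# a commit replays the chain from the root down.

class Commit:
    def __init__(self, parent=None, added=(), removed=()):
        self.parent = parent
        self.added = frozenset(added)
        self.removed = frozenset(removed)

    def materialize(self):
        # Collect the chain of ancestors, root first.
        chain = []
        commit = self
        while commit is not None:
            chain.append(commit)
            commit = commit.parent
        # Replay removals then additions, in commit order.
        quads = set()
        for commit in reversed(chain):
            quads -= commit.removed
            quads |= commit.added
        return quads

root = Commit(added={("Q42", "P31", "Q5", "graph0")})
edit = Commit(parent=root,
              added={("Q42", "P800", "Q25169", "graph0")},
              removed={("Q42", "P31", "Q5", "graph0")})
print(edit.materialize())
```

Old versions stay reachable for free: materializing `root` still yields the original quad set.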
On Fri, May 3, 2019 at 8:19 PM Amirouche Boubekki <
> amirouche.boube...@gmail.com> wrote:
>
>> GerardM's post triggered my interest to post to
On Sun, Dec 8, 2019 at 18:52, Amirouche Boubekki wrote:
>
> I am very pleased to announce the immediate availability of nomunofu.
>
> nomunofu is a database server written in GNU Guile that is powered by
> the WiredTiger ordered key-value store.
>
> It allows one to store and que
I am pleased to share with you the v0.1.4 binary release. It contains
the following improvements:
- The REST API takes JSON as input, which will make it easier to
create clients in other programming languages;
- The REST API takes limit and offset as query string parameters. The
maximum limit is 1000;
-
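Paging through results with those limit and offset parameters can be sketched like this in Python (the endpoint URL is hypothetical; only the 1000 maximum comes from the announcement above):

```python
# Sketch of building paged requests with limit/offset query-string
# parameters, respecting the server's maximum limit of 1000.
from urllib.parse import urlencode

MAX_LIMIT = 1000  # the server caps limit at 1000

def page_url(base, offset, limit=MAX_LIMIT):
    limit = min(limit, MAX_LIMIT)  # never ask for more than the cap
    return base + "?" + urlencode({"limit": limit, "offset": offset})

# Hypothetical endpoint, for illustration only.
print(page_url("http://localhost:8080/query", 2000))
```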
I am very pleased to announce the immediate availability of nomunofu.
nomunofu is a database server written in GNU Guile that is powered by
the WiredTiger ordered key-value store.
It allows one to store and query triples. The goal is to make it much easier,
and definitely faster, to query as big as possible
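The classic way to store triples in an ordered key-value store is to write each triple under several key orderings, so that any lookup pattern becomes a prefix scan. A Python sketch of the technique (not nomunofu's actual on-disk layout):

```python
# Sketch of triple storage over an ordered key-value store: each
# triple is written under three key orderings (SPO, POS, OSP) so
# that fixing any one or two positions is a prefix scan.

PERMUTATIONS = {
    "spo": (0, 1, 2),
    "pos": (1, 2, 0),
    "osp": (2, 0, 1),
}

class TripleStore:
    def __init__(self):
        # A dict with sorted iteration stands in for the ordered store.
        self.store = {}

    def add(self, s, p, o):
        triple = (s, p, o)
        for name, order in PERMUTATIONS.items():
            key = (name,) + tuple(triple[i] for i in order)
            self.store[key] = True

    def subjects_with(self, p, o):
        # Prefix scan over the POS index: keys are ("pos", p, o, s).
        return [key[3] for key in sorted(self.store)
                if key[:3] == ("pos", p, o)]

db = TripleStore()
db.add("Q42", "P31", "Q5")
db.add("Q1868", "P31", "Q5")
print(db.subjects_with("P31", "Q5"))  # ['Q1868', 'Q42']
```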
Hello all,
On Fri, Oct 4, 2019 at 09:58, Thomas Francart wrote:
>
> Hello
>
> I understand the Wikidata SPARQL label service only fetches the labels, but
> does not allow searching/filtering on them; labels are also available as
> regular rdfs:label, on which a FILTER can be applied.
See Etorre
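The FILTER-on-rdfs:label approach described above can be sketched as a query builder in Python (the query text is illustrative):

```python
# Build a SPARQL query that filters directly on rdfs:label,
# rather than going through the label service.

def label_filter_query(needle, language="en"):
    return """
    SELECT ?item ?label WHERE {
      ?item rdfs:label ?label .
      FILTER(LANG(?label) = "%s")
      FILTER(CONTAINS(LCASE(?label), LCASE("%s")))
    }
    LIMIT 10
    """ % (language, needle)

print(label_filter_query("bush"))
```

Note that an unanchored FILTER like this forces a scan over labels, which is why it is much slower than a dedicated search index.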
Hello all!
On Tue, Dec 17, 2019 at 18:15, Aidan Hogan wrote:
>
> Hey all,
>
> As someone who likes to use Wikidata in their research, and likes to
> give students projects relating to Wikidata, I am finding it more and
> more difficult to (recommend to) work with recent versions of Wikidata
>
Hello all ;-)
I ported the code to Chez Scheme to do an apples-to-apples comparison
between GNU Guile and Chez, and took the time to launch a few queries
against the Virtuoso available in Ubuntu 18.04 (LTS).
Spoiler: the new code is always faster.
The hard disk is SATA, and the CPU is dubbed:
On Sun, Dec 22, 2019 at 21:23, Kingsley Idehen wrote:
>
> On 12/22/19 4:17 PM, Kingsley Idehen wrote:
>
> On 12/22/19 3:17 PM, Amirouche Boubekki wrote:
>
> Hello all ;-)
>
>
> I ported the code to Chez Scheme to do an apples-to-apples comparison
> between GNU Gu
Hello all,
I would like to know what you think of the proposal I made at:
https://meta.wikimedia.org/wiki/Grants:Project/Future-proof_WDQS
As I said previously on the Wikidata mailing list, I can address the
following problems related to WDQS:
> In an ideal world, WDQS should:
>
> * scale in
Hello Guillaume,
On Fri, Feb 7, 2020 at 14:33, Guillaume Lederrey wrote:
>
> Hello all!
>
> First of all, my apologies for the long silence. We need to do better in
> terms of communication. I'll try my best to send a monthly update from now
> on. Keep me honest, remind me if I fail.
>
It
omes up with another sharding strategy, how will edits that
span multiple regions happen?
How will it make entering the wikidata party easier?
I dare to write in the open: it seems to me like we are witnessing an
"Earth is flat vs. Earth is not flat" kind of event.
Thanks for the reply!
On Mon, Jul 13, 2020 at 21:22, Adam Sanchez wrote:
>
> I have 14T SSD (RAID 0)
>
> On Mon, Jul 13, 2020 at 21:19, Amirouche Boubekki wrote:
> >
> > On Mon, Jul 13, 2020 at 19:42, Adam Sanchez wrote:
> > >
> > > Hi,
>
On Mon, Jul 13, 2020 at 19:42, Adam Sanchez wrote:
>
> Hi,
>
> I have to launch 2 million queries against a Wikidata instance.
> I have loaded Wikidata in Virtuoso 7 (512 GB RAM, 32 cores, SSD disks with
> RAID 0).
> The queries are simple, just 2 types.
How much SSD storage, in gigabytes, do you have?
Hello,
I would like to know if you are interested in the proposal I made at:
https://meta.wikimedia.org/wiki/Grants:Project/WDQS_On_FoundationDB
As I said previously on the Wikidata mailing list, I can address the
following problems related to WDQS:
In an ideal world, WDQS should:
* scale in
Hello all,
I would like to know what you think of the proposal I made at:
https://meta.wikimedia.org/wiki/Grants:Project/Future-proof_WDQS
As I said previously on the Wikidata mailing list, I can address the
following problems related to WDQS:
> In an ideal world, WDQS should:
>
> * scale in
Checkout my proposal at
https://meta.wikimedia.org/wiki/Grants:Project/Future-proof_WDQS
I started working on a paper (more will follow) that will document and
support my work; see
https://en.wikiversity.org/wiki/WikiJournal_Preprints/Generic_Tuple_Store#Future-proof_WDQS
Happy Holidays ;-)
On Thu, Jun 11, 2020 at 11:13, David Causse wrote:
>
> Hi,
>
> did you "munge"[0] the dumps prior to loading them?
> As a comparison, loading the munged dump on a WMF production machine (128G
> RAM, 32 cores, SSD drives) takes around 8 days.
>
> 0: