On 9/6/13, Daniel Kinzler dan...@brightbyte.de wrote:
The only thing I'm slightly worried about is the data model and
representation
of the metadata. Swapping one backend for another will only work if they are
conceptually compatible.
The data model I was using was simple key-value pairs.
I'm somewhat of a newbie, though, at extracting microformat-style
metadata, so it's quite possible there is a better way, or some higher-level
parsing library I could use (something like XPath maybe,
although it's not really XML I'm looking at).
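The XPath idea above can be approximated without an XML toolchain; a minimal sketch using Python's standard-library HTML parser, collecting key-value pairs from labelled cells. The `fileinfotpl_*` id names follow the conventions documented for Commons machine-readable data, but treat them (and the sample markup) as assumptions, not a stable API; a fuller version would also special-case void tags like `<br>`.

```python
# Sketch: extract key-value metadata pairs from elements whose id
# starts with "fileinfotpl_" (an assumed naming convention), using
# only the standard library.
from html.parser import HTMLParser

class FileinfoExtractor(HTMLParser):
    """Collect the text content of elements with fileinfotpl_* ids."""
    def __init__(self):
        super().__init__()
        self.pairs = {}      # id -> text content
        self._key = None     # id currently being captured, if any
        self._buf = []       # text fragments for the current id
        self._depth = 0      # nesting depth inside the captured element

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self._key is not None:
            self._depth += 1  # nested tag inside a captured cell
        elif attrs.get("id", "").startswith("fileinfotpl_"):
            self._key = attrs["id"]
            self._buf = []
            self._depth = 1

    def handle_endtag(self, tag):
        if self._key is not None:
            self._depth -= 1
            if self._depth == 0:
                self.pairs[self._key] = "".join(self._buf).strip()
                self._key = None

    def handle_data(self, data):
        if self._key is not None:
            self._buf.append(data)

# Hypothetical sample of the kind of markup a file description page emits.
sample_html = """
<table>
  <tr><td id="fileinfotpl_desc">A photo of a lighthouse</td></tr>
  <tr><td id="fileinfotpl_aut">Example Author</td></tr>
</table>
"""
p = FileinfoExtractor()
p.feed(sample_html)
print(p.pairs)
```

This only demonstrates the key-value extraction idea; real pages have richer nesting and localisation that would need more care.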
I am not really proficient with that either; but
Hi Brian!
I like the idea of a metadata API very much. Being able to just replace the
scraping backend with Wikidata (as proposed) later seems a good idea. I see no
downside as long as no extra work needs to be done on the templates and
wikitext, and the API could even be used later to port
I'm just throwing some ideas out there, in hope of inspiring you:
Things you might want to consider (at least in the design of this
API/extension) might be: multi-licensing, derivative and/or 'companion'
linking (subtitle files, cropping, etc., the pictured object) and their
copyrights, keeping
I can offer this demo (quickly ported from toolserver, which now refuses to
run it):
http://tools.wmflabs.org/magnustools/commonsapi.php
Far from perfect, but to show what could be done now.
If anyone's interested in helping me develop it, I'll make it a real tool
on Labs.
Cheers,
Magnus
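For comparison with a scraping tool like the demo above, here is a sketch of what a structured metadata query against the MediaWiki web API looks like, using the imageinfo module with the extmetadata property (the shape such an API eventually took on Commons); the exact parameters are shown for illustration, and `File:Example.jpg` is a placeholder title.

```python
# Sketch: build (not execute) a MediaWiki API request asking for
# machine-readable file metadata via prop=imageinfo&iiprop=extmetadata.
from urllib.parse import urlencode

def commons_metadata_url(title):
    """Return an API URL requesting extmetadata for the given file title."""
    params = {
        "action": "query",
        "prop": "imageinfo",
        "iiprop": "extmetadata",
        "titles": title,
        "format": "json",
    }
    return "https://commons.wikimedia.org/w/api.php?" + urlencode(params)

url = commons_metadata_url("File:Example.jpg")
print(url)
```

The response bundles license, author, and description fields as key-value pairs, which is exactly the data model discussed in this thread.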
This looks great. I know a few sites that are already screen-scraping for
our license info, so this will be a huge help for them. I noticed, however,
that the API currently doesn't support the attribution parameter of the
licensing templates (where it specifies the attribution string). I'm sure
On 8/31/13, James Forrester jforres...@wikimedia.org wrote:
However, how much more work would it be to insert it directly into Wikidata
right now? I worry about doing the work twice if Wikidata could take it now
- presumably the hard work is the reliable screen-scraping, and building the
On 9/1/13, Jean-Frédéric jeanfrederic.w...@gmail.com wrote:
[..]
The downside to this is that, in order to effectively get metadata out of
Commons given current practices, one essentially has to screen-scrape
and do slightly ugly things.
This [1] looks quite acrobatic indeed. Can’t we make
On 09/04/2013 09:59 AM, Brian Wolff wrote:
This [1] looks quite acrobatic indeed. Can’t we make better use of the
machine-readable markings provided by templates?
[1] https://commons.wikimedia.org/wiki/Commons:Machine-readable_data
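For concreteness, the machine-readable markings described on that page look roughly like this: templates such as {{Information}} wrap each field in an element with a labelled id. This is a hand-written sketch based on those documented conventions, not verbatim Commons output, so verify the exact class and id names against a live page.

```html
<!-- Sketch of template-emitted machine-readable markup (assumed names) -->
<table class="fileinfotpl-type-information">
  <tr><td id="fileinfotpl_desc">A photo of a lighthouse</td></tr>
  <tr><td id="fileinfotpl_aut">Example Author</td></tr>
  <tr><td id="fileinfotpl_date">2013-09-04</td></tr>
</table>
```

A scraper can target the stable ids rather than the rendered layout, which is what makes this markup "machine-readable" in the first place.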
On 4 Sep 2013, at 18:59, Brian Wolff bawo...@gmail.com wrote:
On 9/1/13, Jean-Frédéric jeanfrederic.w...@gmail.com wrote:
[..]
The downside to this is that, in order to effectively get metadata out of
Commons given current practices, one essentially has to screen-scrape
and do slightly
Hi,
Wikidata is able to support a subset of the properties needed for infoboxes.
The technology is, however, implemented on several Wikipedias. Recently it
became available for use on Wikivoyage.
The support for interwiki links is well established on both Wikivoyage and
Wikipedia.
Probably much of
Gerard Meijssen wrote:
Wikidata is able to support a subset of the properties needed for infoboxes.
The technology is, however, implemented on several Wikipedias. Recently it
became available for use on Wikivoyage.
The support for interwiki links is well established on both Wikivoyage and
Wikipedia.
Hi Brian,
I've been working on an API module/extension to extract metadata from
Commons image description pages, and display it in the API.
Awesome!
The downside to this is that, in order to effectively get metadata out of
Commons given current practices, one essentially has to screen
Hi all,
I've been working on an API module/extension to extract metadata from
Commons image description pages, and display it in the API. I know
this is an area that various people have thought about from time to
time, so I thought it would be of interest to this list.
The specific goals I have:
On 31 August 2013 03:10, Brian Wolff bawo...@gmail.com wrote:
Hi all,
I've been working on an API module/extension to extract metadata from
Commons image description pages, and display it in the API. I know
this is an area that various people have thought about from time to
time, so I
James Forrester wrote:
However, how much more work would it be to insert it directly into
Wikidata right now?
I think a parallel question might be: is Wikidata, as a social or
technical project, able and ready to accept such data? I haven't been
following Wikidata's progress too much, but I
On Sun, Sep 1, 2013 at 9:02 AM, MZMcBride z...@mzmcbride.com wrote:
I think a parallel question might be: is Wikidata, as a social or
technical project, able and ready to accept such data? I haven't been
following Wikidata's progress too much, but I thought the focus was
currently infoboxes,