Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-18 Thread John Erling Blad
I have tried a couple of times to rewrite this, but it grows out of
bounds anyhow. It seems to have a life of its own.

There is a book from 2000 by Robert Dale and Ehud Reiter: Building
Natural Language Generation Systems (ISBN 978-0-521-02451-8).

Wikibase items can be rebuilt as Plans from the type statement
(top-down) or as Constituents from the other statements (bottom-up).
The two models do not necessarily agree. This is only the overall
document structure and the organization of the data, though; it leaves
out the really hard part – the language-specific realization.
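
As a rough sketch of how the two views can look in a Lua module (the
entity table below is a much simplified version of what
mw.wikibase.getEntity() returns, and P31, "instance of", just stands
in for "the type statement"):

local Plan = {}
Plan.__index = Plan

-- Top-down: derive a document plan from the type statement.
function Plan.fromType( entity )
    local plan = setmetatable( { itemType = nil, sections = {} }, Plan )
    for _, claim in ipairs( ( entity.claims or {} )[ 'P31' ] or {} ) do
        if claim.mainsnak and claim.mainsnak.datavalue then
            plan.itemType = claim.mainsnak.datavalue.value.id
        end
    end
    return plan
end

local Constituent = {}
Constituent.__index = Constituent

-- Bottom-up: every other statement becomes a constituent.
function Constituent.fromStatement( propertyId, claim )
    return setmetatable( { property = propertyId, claim = claim }, Constituent )
end

local function buildConstituents( entity )
    local constituents = {}
    for propertyId, claims in pairs( entity.claims or {} ) do
        if propertyId ~= 'P31' then
            for _, claim in ipairs( claims ) do
                table.insert( constituents,
                    Constituent.fromStatement( propertyId, claim ) )
            end
        end
    end
    return constituents
end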

You can probably redefine Plans and Constituents as entities (I have
toyed around with them as Lua classes) and put them into Wikidata. The
easiest way to reuse them locally would be a lookup structure for
fully or partly canned text, with rules for agreement and inflection
defined as part of those texts. Piecing together canned text is hard,
but easier than building full prose from the bottom up. It is possible
to define a very low-level realization for some languages, but that is
a lot harder.
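
The lookup structure does not have to be fancy; a table keyed on which
properties an entry covers, plus the agreement rules it needs, is
enough to start with. A sketch – the property IDs are real Wikidata
properties used only as examples, and the rule names are made up:

-- Canned text entries. {1}, {2}, ... are slots filled from statement
-- values; 'agreement' names the rule the realizer applies to each slot.
local cannedTexts = {
    {
        covers = { 'P19', 'P569' },     -- place of birth, date of birth
        priority = 20,
        text = 'Born in {1} on {2}.',
        agreement = { 'locative-case', 'date-format' },
    },
    {
        covers = { 'P19' },             -- place of birth only
        priority = 10,
        text = 'Born in {1}.',
        agreement = { 'locative-case' },
    },
}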

The idea behind the lookup of canned text is to use the text that
covers the largest number of the available statements, while still
allowing most of the remaining statements to be covered as well. That
is, some canned text might not support a specific agreement rule;
other canned text then cannot reference it, and less coverage is
achieved. For example, if the direction to the sea cannot be expressed
in a canned text for Finnish, then the distance cannot reference the
direction.

To get around this I prioritized Plans and Constituents, with those
having higher priority being put first. What a person is known for
should go in front of their other work. I also ordered the Plans and
Constituents chronologically to maintain causality; this can also be
called sorting. Priority tends to influence Plans, and order tends to
influence Constituents. Then there is grouping, which keeps some
statements together. Length, width, and height are typically a group.
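
In code the prioritizing, ordering and grouping does not need to be
much more than this (a sketch; the priority and time fields are
something I put on my own Plan and Constituent tables, not anything
Wikibase defines, and the property IDs are just examples):

-- Plans: highest priority first, so what a person is known for
-- comes before the rest of their work.
table.sort( plans, function ( a, b )
    return a.priority > b.priority
end )

-- Constituents: chronological order, to maintain causality.
table.sort( constituents, function ( a, b )
    return a.time < b.time
end )

-- Grouping: keep statements that belong together in one unit.
local groups = {
    dimensions = { 'P2043', 'P2049', 'P2048' },  -- length, width, height
}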

A lake can be described with individual canned texts for length,
width, and height, but those are given low priority. Then a canned
text can be made for length and height together, with somewhat higher
priority. An even higher priority can be given to a canned text for
all three. Given that all three statements are available, the
composite canned text covering all of them will be used. If only some
of them exist, then a lower-priority canned text will be used.
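
Put as code, the selection rule is just: among the canned texts whose
covered statements are all present on the item, pick the one with the
highest priority. A sketch, reusing the entry layout from the table
above:

-- 'available' is a set of the property ids present on the item, e.g.
-- { P2043 = true, P2049 = true } for a lake with only length and width.
local function selectCannedText( cannedTexts, available )
    local best
    for _, entry in ipairs( cannedTexts ) do
        local usable = true
        for _, propertyId in ipairs( entry.covers ) do
            if not available[ propertyId ] then
                usable = false
                break
            end
        end
        if usable and ( best == nil or entry.priority > best.priority ) then
            best = entry
        end
    end
    return best
end

Lower-priority entries can then be tried again for whatever statements
remain uncovered, which is where the coverage argument above comes in.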

Note that the book uses "canned text" a little differently.

Also note that the canned texts can be translated like ordinary
message strings. They can also be defined as a kind of entity in
Wikidata. As ordinary message strings they would need additional data,
but that comes naturally if they are entities in Wikidata. My doodling
put them inside each Wikipedia, as that would make them easier to
reuse from Lua modules. (And yes, you can then override part of the
ArticlePlaceholder to show the text on the special page.)


Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-15 Thread Denny Vrandečić
Cool, thanks! I read this a while ago, rereading again.

On Tue, Jan 15, 2019 at 3:28 AM Sebastian Hellmann <
hellm...@informatik.uni-leipzig.de> wrote:

> Hi all,
>
> let me send you a paper from 2013, which might either help directly or at
> least give you some ideas...
>
> A lemon lexicon for DBpedia, Christina Unger, John McCrae, Sebastian
> Walter, Sara Winter, Philipp Cimiano, 2013, Proceedings of 1st
> International Workshop on NLP and DBpedia, co-located with the 12th
> International Semantic Web Conference (ISWC 2013), October 21-25, Sydney,
> Australia
>
> https://github.com/ag-sc/lemon.dbpedia
>
> https://pdfs.semanticscholar.org/638e/b4959db792c94411339439013eef536fb052.pdf
>
> Since the mappings from DBpedia to Wikidata properties are here:
> http://mappings.dbpedia.org/index.php?title=Special:AllPages=202
> e.g. http://mappings.dbpedia.org/index.php/OntologyProperty:BirthDate
>
> You could directly use the DBpedia-lemon lexicalisation for Wikidata.
>
> The mappings can be downloaded with
>
> git clone https://github.com/dbpedia/extraction-framework ; cd core ;
> ../run download-mappings
>
>
> All the best,
>
> Sebastian
>
>
>
>
> On 14.01.19 18:34, Denny Vrandečić wrote:
>
> Felipe,
>
> thanks for the kind words.
>
> There are a few research projects that use Wikidata to generate parts of
> Wikipedia articles - see for example https://arxiv.org/abs/1702.06235 which
> is almost as good as human results and beats templates by far, but only for
> the first sentence of biographies.
>
> Lucie Kaffee also has quite a body of research on that topic, and has
> worked very successfully and closely with some Wikipedia communities on
> these questions. Here's her bibliography:
> https://scholar.google.com/citations?user=xiuGTq0J=de
>
> Another project of hers is currently under review for a grant:
> https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-resourced_Wikipedia_Editors_in_Creating_New_Articles
> - I would suggest taking a look and, if you are so inclined, expressing
> your support. It is totally worth it!
>
> My opinion is that these projects are great for starters, and should be
> done (low-hanging fruits and all that), but won't get much further at least
> for a while, mostly because Wikidata rarely offers more than a skeleton of
> content. A decent Wikipedia article will include much, much more content
> than what is represented in Wikidata. And if you only use that for input,
> you're limiting yourself too much.
>
> Here's a different approach based on summarization over input sources:
> https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/
>  -
> this has a more promising approach for the short- to mid-term.
>
> I still maintain that the Abstract Wikipedia approach has certain
> advantages over both learned approaches, and is most aligned with Lucie's
> work. The machine learned approaches always fall short on the dimension of
> editability, due to the black-boxness of their solutions.
>
> Also, I agree with Jeblad.
>
> That leaves the question: why is there not more discussion? Maybe because
> there is nothing substantial to discuss yet :) The two white papers are
> rather high level and the idea is not concrete enough yet, so I wouldn't
> expect too much discussion going on on-wiki at this point. That was similar
> to Wikidata - the number who discussed Wikidata at this level of maturity
> was tiny, it increased considerably once an actual design plan was
> suggested, but still remained small - and then exploded once the system was
> deployed. I would be surprised and delighted if we managed to avoid this
> pattern this time, but I can't do more than publicly present the idea,
> announce plans once they are there, and hope for a timely discussion :)
>
> Cheers,
> Denny
>
>
> On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad  wrote:
>
>> An additional note: what Wikipedia urgently needs is a way to create
>> and reuse canned text (aka "templates"), and a way to adapt that text
>> to data from Wikidata. That is mostly just inflection rules, but in
>> some cases it involves grammar rules. To create larger pieces of text
>> is much harder, especially if the text is supposed to be readable.
>> Jumbling sentences together as is commonly done by various botscripts
>> does not work very well, or rather, it does not work at all.
>>
>> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad 
>> wrote:
>> >
>> > Using an abstract language as a basis for translations has been
>> > tried before, and is almost as hard as translating between two common
>> > languages.
>> >
>> > There are two really hard problems: the implied references and
>> > the cultural context. An artificial language can get rid of the
>> > implied references, but it tends to create very weird and unnatural
>> > expressions. If the cultural context is removed, then it can be
>> > extremely hard to put it back in, and without any cultural context it
>> > can be hard to explain 

Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-15 Thread Sebastian Hellmann

Hi all,

let me send you a paper from 2013, which might either help directly or
at least give you some ideas...


A lemon lexicon for DBpedia, Christina Unger, John McCrae, Sebastian 
Walter, Sara Winter, Philipp Cimiano, 2013, Proceedings of 1st 
International Workshop on NLP and DBpedia, co-located with the 12th 
International Semantic Web Conference (ISWC 2013), October 21-25, 
Sydney, Australia


https://github.com/ag-sc/lemon.dbpedia
https://pdfs.semanticscholar.org/638e/b4959db792c94411339439013eef536fb052.pdf

Since the mappings from DBpedia to Wikidata properties are here: 
http://mappings.dbpedia.org/index.php?title=Special:AllPages=202 
e.g. http://mappings.dbpedia.org/index.php/OntologyProperty:BirthDate


You could directly use the DBpedia-lemon lexicalisation for Wikidata.

The mappings can be downloaded with

git clone https://github.com/dbpedia/extraction-framework ; cd core ; ../run download-mappings


All the best,

Sebastian




On 14.01.19 18:34, Denny Vrandečić wrote:

Felipe,

thanks for the kind words.

There are a few research projects that use Wikidata to generate parts 
of Wikipedia articles - see for example 
https://arxiv.org/abs/1702.06235 which is almost as good as human 
results and beats templates by far, but only for the first sentence of 
biographies.


Lucie Kaffee also has quite a body of research on that topic, and has
worked very successfully and closely with some Wikipedia communities on
these questions. Here's her bibliography: 
https://scholar.google.com/citations?user=xiuGTq0J=de


Another project of hers is currently under review for a grant: 
https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-resourced_Wikipedia_Editors_in_Creating_New_Articles 
- I would suggest taking a look and, if you are so inclined, expressing
your support. It is totally worth it!


My opinion is that these projects are great for starters, and should 
be done (low-hanging fruits and all that), but won't get much further 
at least for a while, mostly because Wikidata rarely offers more than 
a skeleton of content. A decent Wikipedia article will include much, 
much more content than what is represented in Wikidata. And if you 
only use that for input, you're limiting yourself too much.


Here's a different approach based on summarization over input sources: 
https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/ - 
this has a more promising approach for the short- to mid-term.


I still maintain that the Abstract Wikipedia approach has certain 
advantages over both learned approaches, and is most aligned with 
Lucie's work. The machine learned approaches always fall short on the 
dimension of editability, due to the black-boxness of their solutions.


Also, I agree with Jeblad.

That leaves the question: why is there not more discussion? Maybe because
there is nothing substantial to discuss yet :) The two white papers
are rather high level and the idea is not concrete enough yet, so I
wouldn't expect too much discussion going on on-wiki at this point. That was
similar to Wikidata - the number who discussed Wikidata at this level 
of maturity was tiny, it increased considerably once an actual design 
plan was suggested, but still remained small - and then exploded once 
the system was deployed. I would be surprised and delighted if we 
managed to avoid this pattern this time, but I can't do more than 
publicly present the idea, announce plans once they are there, and 
hope for a timely discussion :)


Cheers,
Denny


On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad  wrote:


An additional note: what Wikipedia urgently needs is a way to create
and reuse canned text (aka "templates"), and a way to adapt that text
to data from Wikidata. That is mostly just inflection rules, but in
some cases it involves grammar rules. To create larger pieces of text
is much harder, especially if the text is supposed to be readable.
Jumbling sentences together as is commonly done by various botscripts
does not work very well, or rather, it does not work at all.

On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad <jeb...@gmail.com> wrote:
>
> Using an abstract language as a basis for translations has been
> tried before, and is almost as hard as translating between two common
> languages.
>
> There are two really hard problems: the implied references and
> the cultural context. An artificial language can get rid of the
> implied references, but it tends to create very weird and unnatural
> expressions. If the cultural context is removed, then it can be
> extremely hard to put it back in, and without any cultural context it
> can be hard to explain anything.
>
> But yes, you can make an abstract language, but it won't give you any
> high quality prose.
>
> On Mon, Jan 14, 2019 at 8:09 AM Felipe 

Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-14 Thread Denny Vrandečić
Felipe,

thanks for the kind words.

There are a few research projects that use Wikidata to generate parts of
Wikipedia articles - see for example https://arxiv.org/abs/1702.06235 which
is almost as good as human results and beats templates by far, but only for
the first sentence of biographies.

Lucie Kaffee also has quite a body of research on that topic, and has
worked very successfully and closely with some Wikipedia communities on
these questions. Here's her bibliography:
https://scholar.google.com/citations?user=xiuGTq0J=de

Another project of hers is currently under review for a grant:
https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-resourced_Wikipedia_Editors_in_Creating_New_Articles
- I would suggest taking a look and, if you are so inclined, expressing
your support. It is totally worth it!

My opinion is that these projects are great for starters, and should be
done (low-hanging fruits and all that), but won't get much further at least
for a while, mostly because Wikidata rarely offers more than a skeleton of
content. A decent Wikipedia article will include much, much more content
than what is represented in Wikidata. And if you only use that for input,
you're limiting yourself too much.

Here's a different approach based on summarization over input sources:
https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/
-
this has a more promising approach for the short- to mid-term.

I still maintain that the Abstract Wikipedia approach has certain
advantages over both learned approaches, and is most aligned with Lucie's
work. The machine learned approaches always fall short on the dimension of
editability, due to the black-boxness of their solutions.

Also, I agree with Jeblad.

That leaves the question: why is there not more discussion? Maybe because there
is nothing substantial to discuss yet :) The two white papers are rather
high level and the idea is not concrete enough yet, so I wouldn't
expect too much discussion going on on-wiki at this point. That was similar to
Wikidata - the number who discussed Wikidata at this level of maturity was
tiny, it increased considerably once an actual design plan was suggested,
but still remained small - and then exploded once the system was deployed.
I would be surprised and delighted if we managed to avoid this pattern this
time, but I can't do more than publicly present the idea, announce plans
once they are there, and hope for a timely discussion :)

Cheers,
Denny


On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad  wrote:

> An additional note: what Wikipedia urgently needs is a way to create
> and reuse canned text (aka "templates"), and a way to adapt that text
> to data from Wikidata. That is mostly just inflection rules, but in
> some cases it involves grammar rules. To create larger pieces of text
> is much harder, especially if the text is supposed to be readable.
> Jumbling sentences together as is commonly done by various botscripts
> does not work very well, or rather, it does not work at all.
>
> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad 
> wrote:
> >
> > Using an abstract language as a basis for translations has been
> > tried before, and is almost as hard as translating between two common
> > languages.
> >
> > There are two really hard problems: the implied references and
> > the cultural context. An artificial language can get rid of the
> > implied references, but it tends to create very weird and unnatural
> > expressions. If the cultural context is removed, then it can be
> > extremely hard to put it back in, and without any cultural context it
> > can be hard to explain anything.
> >
> > But yes, you can make an abstract language, but it won't give you any
> > high quality prose.
> >
> > > On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone  wrote:
> > >
> > > This is quite an awesome idea. But thinking about it, wouldn't it be
> > > possible to use structured data in Wikidata to generate articles? Can't
> > > we skip the need to learn an abstract language by using Wikidata?
> > >
> > > Also, is there discussion about this idea anywhere in the Wikimedia
> > > wikis? I haven't found any...
> > >
> > > On Sat, Sep 29, 2018 at 3:44 PM Pine W  wrote:
> > >>
> > >> Forwarding because this (ambitious!) proposal may be of interest to
> > >> people on other lists. I'm not endorsing the proposal at this time, but I'm
> > >> curious about it.
> > >>
> > >> Pine
> > >> ( https://meta.wikimedia.org/wiki/User:Pine )
> > >>
> > >>
> > >> -- Forwarded message -
> > >> From: Denny Vrandečić 
> > >> Date: Sat, Sep 29, 2018 at 6:32 PM
> > >> Subject: [Wikimedia-l] Wikipedia in an abstract language
> > >> To: Wikimedia Mailing List 
> > >>
> > >>
> > >> Semantic Web languages allow us to express ontologies and knowledge
> > >> bases in a way meant to be particularly amenable to the Web.
> > >> Ontologies formalize the shared understanding of a domain. But the
> > >> most expressive and

Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-14 Thread John Erling Blad
An additional note: what Wikipedia urgently needs is a way to create
and reuse canned text (aka "templates"), and a way to adapt that text
to data from Wikidata. That is mostly just inflection rules, but in
some cases it involves grammar rules. To create larger pieces of text
is much harder, especially if the text is supposed to be readable.
Jumbling sentences together as is commonly done by various botscripts
does not work very well, or rather, it does not work at all.
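
To give a minimal example of what "adapt that text" means: even a
single number taken from Wikidata forces an inflection choice. A
sketch in Lua, with a made-up message table and a crude plural rule; a
real solution needs the language's own grammar rules:

-- Pick the correct inflected form for a count taken from Wikidata.
-- Only a crude singular/plural split; real plural rules are language
-- specific and belong with the canned text, not hard-coded like this.
local messages = {
    islands = { one = '%d island', other = '%d islands' },
}

local function realize( key, count )
    local forms = messages[ key ]
    local form = ( count == 1 ) and forms.one or forms.other
    return string.format( form, count )
end

-- realize( 'islands', 1 )  --> "1 island"
-- realize( 'islands', 7 )  --> "7 islands"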

On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad  wrote:
>
> Using an abstract language as a basis for translations has been
> tried before, and is almost as hard as translating between two common
> languages.
>
> There are two really hard problems: the implied references and
> the cultural context. An artificial language can get rid of the
> implied references, but it tends to create very weird and unnatural
> expressions. If the cultural context is removed, then it can be
> extremely hard to put it back in, and without any cultural context it
> can be hard to explain anything.
>
> But yes, you can make an abstract language, but it won't give you any
> high quality prose.
>
> On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone  wrote:
> >
> > This is quite an awesome idea. But thinking about it, wouldn't it be
> > possible to use structured data in Wikidata to generate articles? Can't we
> > skip the need to learn an abstract language by using Wikidata?
> >
> > Also, is there discussion about this idea anywhere in the Wikimedia wikis? 
> > I haven't found any...
> >
> > On Sat, Sep 29, 2018 at 3:44 PM Pine W  wrote:
> >>
> >> Forwarding because this (ambitious!) proposal may be of interest to people
> >> on other lists. I'm not endorsing the proposal at this time, but I'm
> >> curious about it.
> >>
> >> Pine
> >> ( https://meta.wikimedia.org/wiki/User:Pine )
> >>
> >>
> >> -- Forwarded message -
> >> From: Denny Vrandečić 
> >> Date: Sat, Sep 29, 2018 at 6:32 PM
> >> Subject: [Wikimedia-l] Wikipedia in an abstract language
> >> To: Wikimedia Mailing List 
> >>
> >>
>> Semantic Web languages allow us to express ontologies and knowledge bases in a
> >> way meant to be particularly amenable to the Web. Ontologies formalize the
> >> shared understanding of a domain. But the most expressive and widespread
> >> languages that we know of are human natural languages, and the largest
> >> knowledge base we have is the wealth of text written in human languages.
> >>
>> We look for a path to bridge the gap between knowledge representation
>> languages such as OWL and human natural languages such as English. We
>> propose a project that simultaneously exposes that gap, allows
>> collaboration on closing it, makes progress widely visible, and is
>> highly attractive and valuable in its own right: a Wikipedia written
>> in an abstract language to
> >> be rendered into any natural language on request. This would make current
> >> Wikipedia editors about 100x more productive, and increase the content of
> >> Wikipedia by 10x. For billions of users this will unlock knowledge they
> >> currently do not have access to.
> >>
> >> My first talk on this topic will be on October 10, 2018, 16:45-17:00, at
> >> the Asilomar in Monterey, CA during the Blue Sky track of ISWC. My second,
> >> longer talk on the topic will be at the DL workshop in Tempe, AZ, October
> >> 27-29. Comments are very welcome as I prepare the slides and the talk.
> >>
> >> Link to the paper: http://simia.net/download/abstractwikipedia.pdf
> >>
> >> Cheers,
> >> Denny
> >> ___
> >> Wikimedia-l mailing list, guidelines at:
> >> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
> >> https://meta.wikimedia.org/wiki/Wikimedia-l
> >> New messages to: wikimedi...@lists.wikimedia.org
> >> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> >> 
> >> ___
> >> Wikipedia-l mailing list
> >> wikipedi...@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
> >
> > ___
> > Wikidata mailing list
> > wikid...@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-14 Thread John Erling Blad
Using an abstract language as a basis for translations has been
tried before, and is almost as hard as translating between two common
languages.

There are two really hard problems: the implied references and
the cultural context. An artificial language can get rid of the
implied references, but it tends to create very weird and unnatural
expressions. If the cultural context is removed, then it can be
extremely hard to put it back in, and without any cultural context it
can be hard to explain anything.

But yes, you can make an abstract language, but it won't give you any
high quality prose.

On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone  wrote:
>
> This is quite an awesome idea. But thinking about it, wouldn't it be possible
> to use structured data in Wikidata to generate articles? Can't we skip the
> need to learn an abstract language by using Wikidata?
>
> Also, is there discussion about this idea anywhere in the Wikimedia wikis? I 
> haven't found any...
>
> On Sat, Sep 29, 2018 at 3:44 PM Pine W  wrote:
>>
>> Forwarding because this (ambitious!) proposal may be of interest to people
>> on other lists. I'm not endorsing the proposal at this time, but I'm
>> curious about it.
>>
>> Pine
>> ( https://meta.wikimedia.org/wiki/User:Pine )
>>
>>
>> -- Forwarded message -
>> From: Denny Vrandečić 
>> Date: Sat, Sep 29, 2018 at 6:32 PM
>> Subject: [Wikimedia-l] Wikipedia in an abstract language
>> To: Wikimedia Mailing List 
>>
>>
>> Semantic Web languages allow us to express ontologies and knowledge bases in a
>> way meant to be particularly amenable to the Web. Ontologies formalize the
>> shared understanding of a domain. But the most expressive and widespread
>> languages that we know of are human natural languages, and the largest
>> knowledge base we have is the wealth of text written in human languages.
>>
>> We look for a path to bridge the gap between knowledge representation
>> languages such as OWL and human natural languages such as English. We
>> propose a project that simultaneously exposes that gap, allows
>> collaboration on closing it, makes progress widely visible, and is
>> highly attractive and valuable in its own right: a Wikipedia written
>> in an abstract language to
>> be rendered into any natural language on request. This would make current
>> Wikipedia editors about 100x more productive, and increase the content of
>> Wikipedia by 10x. For billions of users this will unlock knowledge they
>> currently do not have access to.
>>
>> My first talk on this topic will be on October 10, 2018, 16:45-17:00, at
>> the Asilomar in Monterey, CA during the Blue Sky track of ISWC. My second,
>> longer talk on the topic will be at the DL workshop in Tempe, AZ, October
>> 27-29. Comments are very welcome as I prepare the slides and the talk.
>>
>> Link to the paper: http://simia.net/download/abstractwikipedia.pdf
>>
>> Cheers,
>> Denny
>> ___
>> Wikimedia-l mailing list, guidelines at:
>> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
>> https://meta.wikimedia.org/wiki/Wikimedia-l
>> New messages to: wikimedi...@lists.wikimedia.org
>> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
>> 
>> ___
>> Wikipedia-l mailing list
>> wikipedi...@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
>
> ___
> Wikidata mailing list
> wikid...@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
