Hi Heather,
Thanks for taking the time to articulate your concerns, and in such a clear
and constructive way.

We agree that there can be a dark side to AI, and therefore in response to
your feedback we will be ending development of some planned features, such
as giving the Explanation Engine control over the US nuclear arsenal [1]
and the spacecraft pod bay doors [2]. 😜

In seriousness, though: totally agreed that we need to proceed with
caution. The corpus of scholarly articles is a knowledge resource of
fantastic power, and with that power comes great responsibility (I just
cannot stop referencing movies in this post, it seems).

When someone types "do vaccines cause autism?" into our search box, what we
send back could make the world a much better place, or a much worse one. We'd
best make DARN SURE it is the former. And the more we rely on algorithms to
decide what comes back, the more we are taking risks with untried
technology, in a situation where there are real risks to real people.

Mitigating those risks is a key focus for this project. Our main strategy
to do this is to start small and iterate. Or to quote Ian MacKaye,
we'll "make do with what you have, take what you can get" [3] and keep
moving forward from there.

That will take a few forms. First, the Explanation Engine itself is meant
to be a modular suite of technologies and tools, allowing us to cut our
losses if some technologies don't pan out.

In the website's words, we'll be "adding notes to the text that define and
explain difficult words and phrases.... And that’s just the start: we're
also working on concept maps, automated plain-language translations (think
automatic Simple Wikipedia), structured abstracts, topic guides, and more."

So the first part will be the annotation of difficult words in the text,
which is just a mash-up of basic named-entity recognition and
Wikipedia/Wikidata definitions. Pretty easy, pretty safe. Another set of
features will be automatically categorizing trials as to whether they are
double-blind RCTs or not, and automatically finding systematic reviews.
These are all pretty easy technically, and pretty unlikely to point people
in the wrong directions. But they start adding value right away, making it
easier for laypeople to engage with the literature.
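
To make that concrete, here is a toy sketch of how the term-annotation
piece could work, pairing spaCy's off-the-shelf named-entity recognition
with the public Wikidata search API. To be clear, this is an illustration
under those assumptions, not our actual pipeline:

# Toy sketch: gloss technical terms by combining off-the-shelf
# named-entity recognition (spaCy) with Wikidata's short descriptions.
import requests
import spacy

# Small general-purpose English model
# (install with: python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

def wikidata_gloss(term):
    """Return Wikidata's one-line description for a term, or None."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbsearchentities", "search": term,
                "language": "en", "format": "json"},
        timeout=10,
    )
    hits = resp.json().get("search", [])
    return hits[0].get("description") if hits else None

def annotate(text):
    """Yield (term, gloss) pairs for each entity spaCy finds."""
    for ent in nlp(text).ents:
        gloss = wikidata_gloss(ent.text)
        if gloss:
            yield ent.text, gloss

for term, gloss in annotate("Metformin lowered HbA1c in the treatment arm."):
    print(f"{term}: {gloss}")

The production version needs entity disambiguation, caching, and human
review of the glosses, but the basic shape really is that simple, which is
why it ships first.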

From there we'll move on to the harder stuff, like automatic
summarization. Cautiously, and iteratively. We certainly won't be rolling
anything out to everyone right away. It's a two-year grant, and we're
looking at that as two years of continued development, with constant
feedback from users as well as experts in the library and public outreach
worlds. If something doesn't work, we throw it away. Part of the process.
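
(If you want a feel for why summarization is the hard part, a one-off
experiment is easy to run with an off-the-shelf abstractive model. The
snippet below uses Hugging Face's transformers pipeline and its default
news-trained checkpoint; it's purely an illustration of the state of the
art, not anything we plan to ship.)

# Toy experiment: abstractive summarization with an off-the-shelf model.
# The default checkpoint is a distilled BART fine-tuned on news text,
# not scholarly prose, so expect rough edges on real abstracts.
from transformers import pipeline

summarizer = pipeline("summarization")

abstract = (
    "We conducted a double-blind randomized controlled trial of drug X "
    "in 400 adults with type 2 diabetes. Over 24 weeks the treatment "
    "group showed a 0.8-point reduction in HbA1c versus placebo, with "
    "no serious adverse events attributed to the intervention."
)

print(summarizer(abstract, max_length=40, min_length=10,
                 do_sample=False)[0]["summary_text"])

Run that over a handful of real abstracts and you'll see both the promise
and exactly the failure modes Heather describes, which is why this feature
comes last and stays behind the feedback loop.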

Relatedly, we'll be launching an early beta quite soon (this fall,
probably), to a few thousand early-access users (if you want to be among
these, you can sign up at https://gettheresearch.org). We will no doubt
find plenty of places where we *thought* the AI was giving clear and useful
assistance but that turn out to be full of errors. Then we fix 'em or ditch 'em.
Our early-access group will be really important to us, since they allow us
to filter out broken features before they hit the Whole World.

By keeping things modular, working iteratively, and getting lots of
feedback as we go, we're optimistic that we'll mitigate the all-too-common
mistakes of hubristic, techno-utopian thinking, and at the same time we can
harness recent tech advances to help build a more inclusive, just, and
empowering way to access humankind's collected knowledge.
j

[1] https://en.wikipedia.org/wiki/Skynet_(Terminator)
[2] https://en.wikipedia.org/wiki/HAL_9000
[3] https://www.youtube.com/watch?v=Sdocmu6CyFs

On Thu, Jul 12, 2018 at 1:03 PM, Donald Samulack - Editage <
donald.samul...@editage.com> wrote:

> Yes, but you have to start somewhere!
>
>
>
> There is a quote out there (whether accurate or not) that if Henry Ford
> had asked his customers what they wanted, they would have asked for a
> faster horse. Who would ever have thought of a self-driving car, or even a
> flying car... well, many, actually, and they made it happen!
>
>
>
> My point is that you have no idea what an exercise of this manner will
> spin off as a result of the effort – that is why it is called “research”.
> The goal is a lofty one, but there will be huge wins in scientific language
> AI along the way. Who knows, it may even be necessary for multi-year
> lay-person trips to Mars, if something goes wrong with the spaceship
> along the way (communication delays would rule out any real-time help
> from Earth; AI would be required for local support).
>
>
>
>
>
> Cheers,
>
>
> Don
>
>
>
> -----------------------------
>
>
>
> Donald Samulack, PhD
>
> President, U.S. Operations
>
> Cactus Communications, Inc.
>
> Editage, a division of Cactus Communications
>
>
>
>
>
> *From:* goal-boun...@eprints.org [mailto:goal-boun...@eprints.org] *On
> Behalf Of *Heather Morrison
> *Sent:* Thursday, July 12, 2018 1:49 PM
> *To:* Global Open Access List (Successor of AmSci) <goal@eprints.org>
> *Subject:* [GOAL] Why translating all scholarly knowledge for
> non-specialists using AI is complicated
>
>
>
> On July 10 Jason Priem wrote about the AI-powered systems "that help
> explain and contextualize articles, providing concept maps, automated
> plain-language translations"... that are part of his project's plan to
> develop a scholarly search engine aimed at a nonspecialist audience. The
> full post is available here:
>
> http://mailman.ecs.soton.ac.uk/pipermail/goal/2018-July/004890.html
>
>
>
> We share the goal of making all of the world's knowledge available to
> everyone without restriction, and I agree that reducing the conceptual
> barrier for the reader is a laudable goal. However, I think it is important
> to avoid underestimating the size of this challenge and potential for
> serious problems to arise. Two factors to consider: the current state of
> AI, and the conceptual challenges of assessing the validity of automated
> plain-language translations of scholarly works.
>
>
>
> Current state of AI - a few recent examples of the current status of AI:
>
>
>
> Vincent, J. (2016). Twitter taught Microsoft's AI chatbot to be a racist
> asshole in less than a day. The Verge.
>
> https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
>
>
>
> Wong, J. (2018). Amazon working to fix Alexa after users report bursts of
> 'creepy' laughter. The Guardian.
> https://www.theguardian.com/technology/2018/mar/07/amazon-alexa-random-creepy-laughter-company-fixing
>
> Meyer, M. (2018). Google should have thought about Duplex's ethical issues
> before showing it off. Fortune.
> http://fortune.com/2018/05/11/google-duplex-virtual-assistant-ethical-issues-ai-machine-learning/
>
>
>
> Quote from Meyer:
>
> As prominent sociologist Zeynep Tufekci put it
> <https://twitter.com/zeynep/status/994233568359575552>: “Google Assistant
> making calls pretending to be human not only without disclosing that it’s a
> bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end
> with the room cheering it... horrifying. Silicon Valley is ethically lost,
> rudderless and has not learned a thing.”
>
>
>
> These early instances of AI applications involve the automation
> of relatively simple, repetitive tasks. According to Amazon, "Echo and
> other Alexa devices let you instantly connect to Alexa to play music,
> control your smart home, get information, news, weather, and more using
> just your voice". This is voice to text translation software that lets
> users speak to their computers instead of using keystrokes. Google's Duplex
> demonstration is a robot dialing a restaurant to make a dinner reservation.
>
>
>
> Translating scholarly knowledge into simple plain text so that everyone
> can understand it is a lot more complicated, with the degree of complexity
> depending on the area of research. Some research in education or public
> policy might be relatively easy to translate. In other areas, articles are
> written for an expert audience that is assumed to have spent decades
> acquiring a basic knowledge in a discipline. It is not clear to me that it
> is even possible to explain advanced concepts to a non-specialist audience
> without first developing a conceptual progression.
>
>
>
> Assessing the accuracy and appropriateness of a plain-text translation of
> a scholarly work intended for a non-specialist audience requires expert
> understanding of the work and thoughtful understanding of the potential for
> misunderstandings that could arise. For example, I have never studied
> physics. If I looked at an automated plain-language translation of a physics
> text, I would have no means of assessing whether the translation was
> accurate or not. I do understand enough medical terminology, scientific and
> medical research methods to read medical articles and would have some idea
> if a plain-text translation was accurate. However, I have never worked as a
> health care practitioner or health care translation researcher, so I would
> not be qualified to assess the work from the perspective of whether the
> translation could be mis-read by patients (or some patients).
>
>
>
> In summary, Jason and I share the goal of making all of our scholarly
> knowledge accessible to everyone, specialists and non-specialists alike.
> However, in the process of developing tools to accomplish this it is
> important to understand the size and nature of the challenge and the
> potential for serious unforeseen consequences. AI is in very early stages.
> Machines are beginning to learn on their own, but what they are learning is
> not necessarily what we expected or wanted them to learn, and the impact on
> humans has been described using words like 'creepy', 'horrifying', and
> 'unethical'. The task of translating complex scholarly knowledge for a
> non-specialist audience and assessing the validity and appropriateness of
> the translations is a huge challenge. If this is not understood, and plans
> are not made to conduct rigorous research on the validity of such
> translations, the result could be widespread dissemination of incorrect
> translations.
>
>
>
> best,
>
>
>
> Heather Morrison
>
> Associate Professor, School of Information Studies, University of Ottawa
>
> Professeur AgrĂ©gĂ©, École des Sciences de l'Information, UniversitĂ© d'Ottawa
>
> heather.morri...@uottawa.ca
>
> https://uniweb.uottawa.ca/?lang=en#/members/706
>


-- 
Jason Priem, co-founder
Impactstory <http://impactstory.org/>: We make tools to power the Open
Science revolution
follow at @jasonpriem <http://twitter.com/jasonpriem> and @impactstory
<http://twitter.com/impactstory>
_______________________________________________
GOAL mailing list
GOAL@eprints.org
http://mailman.ecs.soton.ac.uk/mailman/listinfo/goal
