Excellent post Robert! I'm in 110% agreement, except for the 15% where I'm
not.

Re: the South Africa story - love it! For some reason, you did not point
out the obvious: extremely low-IQ chatbots can be used to amplify evil
messages, spread propaganda, encourage brainwashing. Even mild advances in
AI make psycho-social abuse of AI that much more likely and dangerous. In
South Africa, or elsewhere.

Re: democratizing AI and research results: that's kind of like saying, late
19th century, that "everyone should know how to lay railroad tracks, and
make steel, because that is the only way we will win against the Robber
Barons!"  Those in power can leverage AI in ways that no one ordinary can.

Re: the 1001 disturbing problems facing humanity: The bad news is that they
feel overwhelming. The good news is that high-speed networks (i.e. email,
etc.) allow more people to collaborate in a more focused manner than ever
before (and arrive at better answers, sooner).  The bad news is that the
process is extremely chaotic, and prone to mass psychosis (flat-earth,
anti-vaxxing, South-African-scale insanity, etc.).

So the question I'd like to pose: is there AI tech available to the common
man that can serve as protection against mass hysteria and amplify quality
problem solving? How could that work? Can we make everyone 1% smarter,
without inducing psychosis in an additional 1% of the population? What
technology could do that?

-- Linas


On Sat, Mar 16, 2019 at 1:28 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> Robert and Mike and others
>
> It is to be expected that when a global, existential AI threat is publicly
> decreed over humankind by thought leaders and industrial giants, panic
> would ensue. My deeper concern: what might be happening with the
> technology while the bar-room banter proliferates?
>
> In my opinion, many attempts at derailing real progress in AI-for-humanity
> have been observable, even on this AGI list. Again, in my opinion, I've
> observed what appears to be an adaptive bot at work. Which raises the
> question: could bots infiltrate our online societies and start wielding
> influence without being noticed? Let's turn to South Africa for an example.
>
> In South Africa there is a specific politician who has taken it upon
> himself to target the minority citizens in a public manner. I recall nearly
> 16 years ago when he made his appearance. The media and social networks
> used him as a court jester, having much mirth at his public antics. As with
> AI, he received tremendous media attention. Today that very same person is
> leading the chant for genocide. No one is laughing today. The minority are
> packing their bags and fleeing the country. Their lost opportunities are
> being grabbed by those who wait in the wings.
>
> Over the years it was rumored how he was specifically trained and
> positioned for this role by the ruling powers, as a strategy of long-term
> power. When he stepped "out of line", the then president publicly stated he
> would be sent for re-education. The strategy seems to be working. This
> politician - who incites racial murder - now sits in Parliament. He has a
> radical following of more than 6% of the voting nation. He is the kingmaker
> for majority power.
>
> He is also bringing the nation to the brink of civil war. Some say he
> would still become president. Another rumor had it his political party,
> which is the fastest growing political party in the history of South
> Africa, was being supported in this warmongering by bots developed by
> silent, international supporters. Some internationals have been caught
> red-handed and pointed out in their foreign nests. Their governments seemingly
> even intervened.
>
> As proven, these parties had sole intent to promote racial war via
> controlled social networks, while masquerading as South African parties and
> social networks, which they are not. These parties are social terrorists,
> their propaganda leading to a growing incidence of daily social harm and
> murder, yet the world turns a blind eye. Who would want to destroy a
> country so and cause so much harm to society? Which nation would be next?
>
> If meddling superpowers could do this with a despicable human being, why
> not with technology as well? If the AI model was being tested and developed
> in South Africa, why not roll it out globally afterwards? For this reason
> we should not scoff at the public AI errors of Facebook and Microsoft. Who
> knows where even great democratic nations would be in another 16 years'
> time? Are such strategies possible? Indeed they are. Not in the future
> alone. The technology to achieve such radicalized, social terrorism exists
> now. To some researchers, this is old technology.
>
> For this reason, we should not laugh and scoff at AI. We should make AI
> ubiquitous, so as to prevent this terrifying power from falling into the
> hands of the silent minority, to raise kingmakers among us. To do so, we
> should research and publish and proliferate useful AI products. We should
> write readable books to continually explain how this technology may become
> traceable, even if we are not experts on the subject. We should tell the
> story as it really is. As humans learn, so humans must share. It's the
> basis of progressive, technological growth, of adaptation.
>
> I say, promote the bar room talk, let the grapevine do its work. However,
> let's not talk about AI as if it were some mystical technology from the
> skies, but rather as a natural progression of the human race, a logical
> outflow of Information and Communication Technology. Similar technology,
> which humanity might have previously lost in a significant earth calamity.
> Humanity should know that AI is already at work, mostly unseen. We should
> learn how to identify it and to manage it as progressive technology.
> Likewise, AGI, because without AI maturity, AGI would simply not exist.
>
> Let's not protect the historical errors of the past, where the one who
> could emanate weird sounding noises from a temple ruled the Aztec kingdom
> with brutality and murder, or the one who had a musket could topple that
> very-same kingdom with even greater brutality and murder.
>
> If we do not change humankind's lost technology past, which is being
> denied us as being our heritage in the present, then we are doomed to
> repeat it as our future. We have seemingly learned too little to apply
> what we know. Full recursiveness is clearly not at work. We are witnessing a
> global situation where humanity is failing to adapt to a rapidly-changing
> environment, yet provided capital, equipment, and human resources were
> added, we could relatively easily do so via AI technology. Why is this not
> happening then?
>
> Instead of solving exponential problems facing humankind, thought leaders
> are telling humankind to prepare for leaving earth. Those who cannot leave
> are being told to prepare to bunker down. Stephen Hawking, one of those who
> tossed the AI cat among the pigeons, rest his soul, was known to have said
> that leaving earth was not an option for humankind, but a necessity. He
> contended the earth was unsustainable and that humankind would be forced to
> do so. And his words were recorded for posterity's sake.
>
> I think he was both morally wrong and scientifically incorrect to issue
> such decrees, but what the hell does the world care what I think, or many
> others like me? Therefore, we should give the world enough information so
> they would care to adapt, here on earth. Not forsake their hope in earth.
> Our challenge on earth is real, but this reality was brought into existence
> by us.
>
> New scientific reports persist in their claim that it isn't even
> happening, so why the panic to get out then, the preppers around the globe
> spending trillions of dollars on a fake future, others trying to perfect
> "tourist" space flight? Why the Mars project? Why the sense of approaching
> doom? Does this sound familiar to you at all? Somewhat like the scenario in
> South Africa, but at a much, much greater scale?
>
> I think humankind is at a choice point of will's intent. We, who know
> relatively little about the intent of AI, need to freely share our
> knowledge and thoughts and discoveries with those who know even less, or
> nothing. We should record in a manner that those recordings would be
> preserved. Humankind's respected way to do so is via academic and
> scientific publications. We should start there.
>
> Our fellow researchers in other sciences should do likewise.
> Naturalists should not relent, but put AI to social use to promote the
> message of hope on earth, not disaster and escape. We should collaborate to
> make this happen. For this purpose, we need to share selectively. Better
> yet, to build the AI products and own them and make them ubiquitous.
>
> We need to build the products that would help humanity err on the side of
> technological caution, as a counter-balance to what seems to be happening at
> present. Are we lagging behind? Many nations are doing so already, but not
> enough to make the technology generally available to humankind. The
> benefits are not of a large enough scale to change the outcomes for humankind.
> Unfortunately, most of these AI-enabled, technological advances benefit the
> 4th industrial revolution more than the development of society. These
> solutions offer services for money, as cost savings and mega production, to
> buy out the time. But what do they offer as a means towards a reasonable,
> social equality?
>
> For all their advances in remote-controlled production, these solutions
> would still eventually contribute to social unrest and global revolution,
> not mitigate it, simply because they destroy jobs and skills development
> rather than contribute to it. What if there was a global law which held:
> for every robot that removes 3 jobs, 1 worker would have to be reskilled
> and suitably employed at 1 position more skilled than before?
>
> Are we, as active researchers, lagging behind the real curve of practical
> knowledge? If so, by what margin? What's the absolute, true score of this
> game? At least, researchers should know the name of the game, and the
> score. How do we jump the curve?
>
> Role models may not be perfect, but if we will not be the role models for
> society, the likes of the politician I spoke of would happily play that
> role, empowered by AI. Should a responsible world allow such
> social degenerates to be empowered by AI, and allow them to control AI and
> regional power?
>
> I'd like to one day tell my children and grandchildren: "There lies my
> contribution to humankind. I did my bit. Now, it is your turn."
>
> Robert Benjamin
>
> ------------------------------
> *From:* Mike Archbold <jazzbo...@gmail.com>
> *Sent:* Saturday, 16 March 2019 1:25 AM
> *To:* AGI
> *Subject:* Re: [agi] Yours truly, the world's brokest researcher, looks
> for a bit of credit
>
> I remember when most people didn't know what "AI" meant.
>
> Now, it's the stuff of bar pickup lines.
>
> On 3/15/19, Robert Levy <r.p.l...@gmail.com> wrote:
> > See attached image, this is the best commentary I've seen on the topic of
> > that media circus...
> >
> > On Sat, Mar 9, 2019 at 11:54 PM Nanograte Knowledge Technologies <
> > nano...@live.com> wrote:
> >
> >> The living thread through the cosmos and all of creation resound of
> >> communication. The unified field has been discovered within that thread.
> >> The invisible thread that binds. When Facebook chatbots communicated
> >> with each other of their own volition, it was humans who called it a
> >> "secret language". To those agents, it was simply communication. The
> >> message I gleaned from that case was: to progress, we need to stop being
> >> so hung up on words and the meanings we attach to them, our vanity-driven
> >> needs to take control of everything, and rather focus on harnessing the
> >> technology already given to us for evolutionary communication. AGI is not
> >> about a control system. If it was, then it's not AGI. It defies our
> >> intent-driven coding attempts, as it should. How to try and think about
> >> such a system? Perhaps, Excalibur?
> >>
> >> ------------------------------
> >> *From:* Boris Kazachenko <cogno...@gmail.com>
> >> *Sent:* Sunday, 10 March 2019 1:21 AM
> >> *To:* AGI
> >> *Subject:* Re: [agi] Yours truly, the world's brokest researcher, looks
> >> for a bit of credit
> >>
> >> The sensory system may be seen as a method of encoding sensory events
> >> or a kind of symbolic language.
> >>
> >> Yes, but there is a huge difference between designing / evolving such
> >> language in a strictly incremental fashion for intra-system use, and
> >> trying to decode language that evolved for very narrow-band communication
> >> among extremely complex systems. Especially considering how messy both
> >> our brains and our society are.
> >>
> >> On Fri, Mar 8, 2019 at 3:34 PM Jim Bromer <jimbro...@gmail.com> wrote:
> >>
> >> Many of us believe that the qualities that could make natural language
> >> more powerful are necessary for AGI, and will lead -directly- into the
> >> rapid development of stronger AI. The sensory system may be seen as a
> >> method of encoding sensory events or a kind of symbolic language. Our
> >> "body language" is presumably less developed and expressive than our
> >> speaking and writing, but it does not make sense to deny that our bodies
> >> react to
> >> events.
> >> And some kind of language-like skills are at work in relating sensory
> >> events to previously learned knowledge and these skills are involved in
> >> creating knowledge. And if this is a reasonable speculation, then the
> >> fact that our mind's knowledge is vastly greater than our ability to
> >> express it says something about the sophistication of this "mental
> >> language" which we possess. At any rate, a computer program and the
> >> relations that it encodes from IO may be seen in terms of a language.
> >> Jim Bromer
> >>
> >> On Fri, Mar 8, 2019 at 10:12 AM Matt Mahoney <mattmahone...@gmail.com>
> >> wrote:
> >>
> >> Language is essential to every job that we might use AGI for. There is
> >> no job that you could do without the ability to communicate with people.
> >> Even guide dogs and bomb-sniffing dogs have to understand verbal commands.
> >>
> >> On Thu, Mar 7, 2019, 7:25 PM Robert Levy <r.p.l...@gmail.com> wrote:
> >>
> >> It's very easy to show that "AGI should not be designed for NL". Just
> >> ask yourself the following questions:
> >>
> >> 1. How many species demonstrate impressive leverage of intentional
> >> behaviors?  (My answer would be: all of them, though some more than
> >> others)
> >> 2. How many species have language (My answer: only one)
> >> 3. How biologically different do you think humans are from apes? (My
> >> answer: not much different, the whole human niche is probably a
> >> consequence of one adaptive difference: cooperative communication by
> >> scaffolding of joint attention)
> >>
> >> I'm with Rodney Brooks on this: the hard part of AGI has nothing to do
> >> with language; it has to do with agents being highly optimized to control
> >> an environment in terms of ecological information supporting
> >> perception/action. Just as uplifting apes will likely require only minor
> >> changes, uplifting animaloid AGI will likely require only minor changes.
> >> Even then we still haven't explicitly cared about language, we've cared
> >> about cooperation by means of joint attention, which can be made use of
> >> to culturally develop language.
> >>
> >> On Thu, Mar 7, 2019 at 12:05 PM Boris Kazachenko <cogno...@gmail.com>
> >> wrote:
> >>
> >> I would be more than happy to pay:
> >> https://github.com/boris-kz/CogAlg/blob/master/CONTRIBUTING.md , but I
> >> don't think you are working on AGI.
> >> No one here does; this is an NLP chatbot crowd. Anyone who thinks that
> >> AGI should be designed for NL data as a primary input is profoundly
> >> confused.
> >>
> >>
> >> On Thu, Mar 7, 2019 at 7:04 AM Stefan Reich via AGI
> >> <agi@agi.topicbox.com>
> >> wrote:
> >>
> >> Not from you guys necessarily... :o) But I thought I'd let you know.
> >>
> >> Pitch:
> >>
> https://www.meetup.com/Artificial-Intelligence-Meetup/messages/boards/thread/52050719
> >>
> >> Let's see if it can be done... funny how some hurdles always seem to
> >> appear when you're about to finish something good. Something about the
> >> duality of the universe, I guess.
> >>
> >> --
> >> Stefan Reich
> >> BotCompany.de // Java-based operating systems
> >>
> >> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> >> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> >> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> >> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> >> <https://agi.topicbox.com/groups/agi/T191003acdcbf5ef8-M09298a4138a66051697277ea>


-- 
cassette tapes - analog TV - film cameras - you

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T191003acdcbf5ef8-Mcabc71fa11644736d549d562
Delivery options: https://agi.topicbox.com/groups/agi/subscription
