Thanks for sharing. I only have one question: "What time is it?"

I think I see why AGI remains elusive. At present, all the reasoning around 
AGI seems to rest on linear models, which in turn rely on the computational 
power of linear instruments and on parallel processing (linear processing in 
bands).

So do all the other methods for "evolving" intelligence. Some practitioners 
then compound linear models and call it complexity, thinking (erroneously) 
that backward chaining - and even forward chaining, as professed by some on 
here - would automatically yield the AGI results they crave.

The power, allure, and illusion of programming has always been the ability to 
just create stuff out of thin, binary air. We could call cables and instruments 
a cloud, and it would become a cloud. We imagined it, let it be, we named it, 
and it became. AI seemingly augments that illusion to delusional levels. So 
then, programmers try to recreate the "world" according to their 
gaming-scripted (brainwashed) visions.

Meanwhile, every day real people get up for another dreary, real day made 
difficult by half-assed technology, and either sit at home waiting for a real 
job, or go slog away just to make ends meet with only one dream: to sit down at 
night with a TV dinner, aimlessly scrolling through FB, and zoning out. Well, 
at least many Westerners do.

I think this is how fake AGI would eventually be "created". It would just be 
brought into existence by naming it, formulating a good-enough idea of it, and 
then declaring its existence via a linear machine running a set of linear 
computer programs. It would be announced to the world: "We've created AGI!" 
It would be a great marketing exercise!

Meanwhile, back at the ranch, it would generally just be another bot with a 
little more processing power and faster chips than the one before it, and some 
smarter logic it operated on. Still, Google and Apple and Amazon and Microsoft 
and IBM and all the others would have us believe it's real AGI, capable of 
innate reasoning and adaptation to life. But it wouldn't really be, would it?

If you released a trained cockroach into the walls of a King's palace, and it 
eventually became gold-dusted from running through various vaults, having 
access to everywhere, mostly unseen, it's still just a bug, isn't it? That's 
what AI agents are today on the Internet, right? And some masters hold to the 
theory that if you left it in the palace long enough, it would eventually 
become the King's household pet. But it wouldn't, would it?

Movies too would sell us the fantasy of "AGI" as family members, even lovers, 
or enemies of humanity. All of these are messy genetic misfits, cyborgs, or 
enhanced personal gadgets - symbols of very confused human thinking. Either 
way, it's definitely not AGI as envisaged by Turing et al. These are just 
monsters, and legalized human experimentation under a catchy buzzword.

Real AGI would not need humankind to exist, or for its existence. It would 
exist independently and become self-sufficient, as a singularity, as human 
beings are capable of doing. However, governments have convinced us we cannot 
live and survive like that. They made us dependent on them and passed laws and 
rules to keep it that way. And if we don't comply, they'll lock us up and kill 
us. They'll make our lives really difficult. They'll build AI bots, brainwash 
us, laser-burn us, and use drones and drone bots to hunt us down. They'll bug 
us no matter where we go.

Governments - with the help of the military-industrial complex - would do that 
these days. Many won't, but some would do it to their own countrymen and to 
others. They would hardly think twice about doing it, especially if it was done 
to others. If they could do it by proxy, for example via drones, all the 
better!

And we all bought into that narrative. With all of our lives. We bought it 
because we were convinced that if we did not do it first, then it would be done 
like that to us. We did it because we became convinced we were in imminent 
danger. We did it in self defense. But today, we know that some of those 
dangers were imaginary. There were no weapons of mass destruction in Iraq, and 
that's where this pattern started.

What have we learned from that? Two million civilians had to be slaughtered by 
the military-industrial complex, based on political lies. Caring, responsible 
men and women believed them, and willingly sacrificed their sons and daughters. 
Within a short 20 years, the third mass killing of millions of civilians has 
just been (almost) concluded in Syria. There will be more.

It seems that slaughtering does become easier after first blood has been drawn, 
even for a nation. Are you still under any delusion that AGI would NOT be used 
to slaughter more innocent men, women, and children more effectively? Most 
combat gear now contains AI. And we all know AI (as wannabe cockroaches) never 
fails, right?

Some nationalists participated under the guise of duty and honor. Others, just 
because it was the easiest way to live. Too many did it because they enjoyed 
it. All of us bought into that narrative. We're not really brainwashed. We 
just like thinking we are so we do not have to take full responsibility to the 
world for our choices, words, and deeds.

We've become a "The computer said so" generation. Distant and unaccountable. 
When did humankind die to make way for technology? Unintelligent living, in the 
sense of living in disharmony with earthkind, has no discriminatory clauses. 
Anyone's equally welcome. Dependent living is easy. Now, you could be 400 
pounds of dead social weight and still become a TV celebrity. Say whaaaat?

Here's a wild thought for us: if humankind cannot emulate AGI, how could AGI 
ever emulate us?

Thankfully, we still have Turing to save us from such a delusional state of 
achievement. But, for how long? He's being called irrelevant so often already.

While manufacturing relative truths could be a scientific invention, it does 
not mean the science is good.

For AGI, first, we need non-linear computing platforms and instruments. Not 
only quantum computing, but methodology for general, quantum thinking. We need 
to escape nearly all that we are and have become. We need to be intellectually 
reincarnated, perhaps even structurally altered. Not in an augmented sense, but 
in an inherent sense.

Sadly though, the persons who could make the transition by feat of 
intelligence(s) alone would probably not be able to function well in society 
afterwards. They'd be considered real freaks. That's probably why Elon Musk 
has to take them all the way to Mars. He sees 1 million of 9 billion persons 
eventually leaving earth to go live on Mars.

Are you a 1-in-8.9-billion kind of person? If not, make AGI just a sideline 
hobby. Rather spend your short time on earth enjoying the beach and developing 
relationships with the living. Life's too short to waste on fantasies. I think 
it's on the Mars mission where AGI would finally be able to shed its skins.

How so? A person cannot live in 2 skins at the same time. It's an earthly 
limitation. Those who could are considered either insane or supersane. 
Humanity tends towards the former classification, though. But on Mars you'll 
have to be able to live in 2 skins at once, to have multiple minds in sync.

Limitations such as these on earth might be the strongest reasons why - all 
things remaining equal - it might take another 25 years to advance to a basic 
design of real AGI. No matter how hard humanity tries, AGI cannot evolve from 
linear models, not from the way we are now. Categorically, I do not think real 
AGI could ever evolve from AI.

AI is simply not intelligent enough to evolve into a higher plane of existence 
on its own. Not that AI cannot become extremely complex. Of course it can, but 
more in a spaghetti-code sense, not in the sense of evolving as an instrument 
of effective complexity.

I'd venture to say that this limitation to AGI would be proven correct in time 
- as a scientific fact. Pity we would've wasted so many years and resources to 
come to such a foregone conclusion.

I think the real AI threat to the world is that strangely-empowered individuals 
and corporations dream up fantasy applications for logic-controlled machines. 
Little people needing big machines. It's a need to compensate for feelings of 
inferiority - always a bad motivation for building a big gun. It's a tool for 
out-competing everyone else. It's mostly driven by small people who've given up 
on real people and everyday life. People who need more power, only to feel even 
less powerful than before.

Fortunately, it's also being driven by responsible persons who are trying to 
deal with future challenges by providing long-term solutions to the world's big 
problems. As we know, though, they are in the minority. Most everyone employed 
in AI is just compensating by building bigger and better guns to blast 
"enemies" with. These are delusional acts motivated by fear, hatred, and greed, 
not confidence.

It's the delusion of the big game, isn't it? Marketing hype! Convince everyone 
that life's but a game, then anything goes.

When my son was little, he thought nothing of hunting down innocent grannies 
in the streets of 'Grand Theft Auto' and shooting them in the back. Because 
I could not monitor him all day long, he had an "adult" who secretly allowed 
him to spend endless hours playing those games.

He's an adult now, though not yet a grown man. He thinks nothing of running 
with real gangs and wielding knives. He's proud of his knife scars and near 
misses. He pumps iron and hates his boss and his job. He's failed in his civic 
duty to the point where he's labelled 'undesirable'.

Yet, I know him. I know who he is. I know what he had the potential to become. 
He's ripe for harvesting as a combat drone operator for the government, isn't 
he? He could probably eat a sandwich while hunting down grannies, innocent or 
not. Easier if they were "baddies". That would make him a "goodie". This 
concerns me deeply. I don't want that for him. What to do?

It seems I failed him as a father. Maybe I should've confiscated all his gaming 
equipment and helped him learn to be more a part of society. Maybe I should've 
spent more time with him to influence him, taken him fishing, just had coffee 
and taught him how to talk things through. Maybe too many things.

Point being, we should be spending more time with the living than machines. We 
should be spending less time trying to make ourselves rich by making others 
super rich. Life's for the living. It's time to spend more time with the 
living. Online life is flatlining. Sons need their fathers, even if they are 
convinced they don't. Likewise, daughters need their mothers. Fathers need 
their sons. Mothers need their daughters.

Hey! Why bring down the party, dude! We're just here for the fun stuff!!! OK, 
so if I could, what would I like to teach my clever son today? That these 
"creators" of destructive technologies simply have no idea what it feels like 
when death and mayhem rain down from unseen heights in the form of possessed 
automatons? Would he care, or, like so many young adults today, tell me how 
they were victims of some system or another? As we've seen on this mailing 
list, some younger ones have seemingly concluded: when I get my hands on AGI, 
they will pay!!!

Until remote-controlling programmers and operators experience first hand what 
they want for their enemies - having to live through the horror of drone 
attacks and robotic machine-gun posts and laser blindness and scatter bombs 
and collecting the body parts of friends and loved ones - they will remain 
incapable of understanding. Not even Augmented Reality could prepare you for 
such a reality. Why would anyone want that for their sons and daughters? 
Surely, that in itself is insanity?

Developing AGI requires a lot of sanity, and a helluva lot more than that. It 
requires what is referred to as sense. Good thing AGI is eluding us. Humankind 
cannot handle it, nor its truth. That's why we'll have to regress to being 
stupid and go back to the caves to learn all over again. We'll restart with 
sticks and stones. We are the 3rd generation of humanity on earth, after 2 
prior extinctions, and we're still just not getting it right, are we? When are 
we going to show evidence of self-recursiveness?

The reality we're creating for ourselves, our children, and our grandchildren 
is well deserved. Yet, our descendants-to-be have done nothing evil in life. 
Therefore, they don't really deserve the world we're leaving behind for them, 
do they?

It's madness to want to create an environment in which your descendants cannot 
reasonably survive. If only we knew how far governments have spun out of 
control. If only we realized how many genies have left their bottles and 
escaped into our environment. If only we realized how many superviruses are 
slowly thawing out because of climate change, we'd be scared enough to 
immediately start changing how we think and behave as individuals and 
communities.

Did you know DNA has the potential to spontaneously recombine and come to 
life? How about that? Wow! Nope. Super dread!!! Things were buried for very 
good reasons for tens of thousands of years. And now they're slowly being 
unburied again. They could come alive. Given the climatic conditions on earth, 
they might just. We are in terrible danger, but not from "enemies" - from 
ourselves.

But because we're preoccupied, and ignorant, and arrogantly refusing to become 
adequately informed, we simply do not know. Therefore, we do not feel afraid 
enough to change and act. If such ignorance visibly shows on an intelligent 
list such as AGI, can you just imagine how badly it's faring "out" there?

Yet, we don't change our ways, do we? Not unless we're being forced to. If we 
don't like the look and feel of this reality, then we'd better start changing 
it for the good of enduring intelligence. Generating new, computer-based 
"realities" isn't going to cut it. Still, we could help a lot. AI could help a 
lot. So could AGI, eventually.

But it seems humankind, and us here, are not yet afraid enough to focus on 
repairing the world we've broken, are we? Somehow, we do not shoulder the 
accountability and responsibility for our habitat. We'll just blame government 
for the storms and damage. How smart is that?

Many of us would rather get paid to help break it some more. I think, all this 
is why AGI is eluding us. We're just not evolved enough yet to have it. And 
still, we may be needing it here on earth, more than ever before.

Does the world still have the time for such painfully-slow learning? Tick-Tock.

  https://thebulletin.org/doomsday-clock/current-time/


________________________________
From: sean_c4s via AGI <[email protected]>
Sent: Thursday, 21 November 2019 03:15
To: AGI <[email protected]>
Subject: [agi] Absolute basics of artificial neural networks

Basics of neural networks.
https://ai462qqq.blogspot.com/2019/11/artificial-neural-networks.html
To be continued...
If you find it confusing or not clear at least it is a set of hints of things 
to consider about artificial neural networks.
