Re: [agi] Mindlessness.

2020-06-09 Thread immortal . discoveries
The damn good thing is that even if our economy and hardware dwindle, humans 
still retain the seed/wisdom. We know how to build Nvidia computers, AI 
algorithms, etc.

The human genome and an AI algorithm are MUCH smaller than the forces they 
create. A human cell's DNA is so tiny. It grows into the body/economy. It's 
all there. After millennia of evolution.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mbeba926c6baa317715e31614
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-06-08 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> The ending of this video is so creepy lol, the music is creepy; start
> at 8:05
> https://www.nytimes.com/2020/05/31/us/george-floyd-investigation.html
> it's like a zombie apocalypse

If you want to go deeper into this, you will need to find some
information that has been scrubbed from the web. Find a history book
that is itself at least 40 years old and look up what the Jacobin
Society did during the French Revolution and what their platform was.


-- 
The vaccine is a LIE. 

Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M821ffc69738a1554c3c6eb39


Re: [agi] Mindlessness.

2020-06-08 Thread immortal . discoveries
The whole video is creepy actually, never mind. 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mb2dc70fddc82d085f5af33b0


Re: [agi] Mindlessness.

2020-06-08 Thread immortal . discoveries
The ending of this video is so creepy lol, the music is creepy; start at 8:05
https://www.nytimes.com/2020/05/31/us/george-floyd-investigation.html
it's like a zombie apocalypse
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M6239f33ab12abe365067dfdb


Re: [agi] Mindlessness.

2020-06-05 Thread immortal . discoveries
So the virus had no mind.

And neither do these crowds!!!

It's a mindless process! Omg! We can't control these large and small things.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mf457aa1078d32714e4c6e3bd


Re: [agi] Mindlessness.

2020-06-05 Thread immortal . discoveries

Everybody has got to spread the word that we need to keep the economy going, to 
progress to future technology before we die. We have these computers and 
advanced algorithms that we didn't have until just recently; we are getting 
very close. We can't take vacations. Something very powerful awaits us, so big 
and fun you probably don't realize it. Too many humans are so concerned about 
short-term things but not actual long-term survival! Just sad. They all 
disappear in the end after all that effort. We as a species will get much 
longer lives - just look at the future human AI species - and we ourselves can 
get in if we work hard.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M62d0d91dfdcb0f74dc717490


Re: [agi] Mindlessness.

2020-06-02 Thread immortal . discoveries
Well, government is good, but when the higher layers are made of crap, like 
ours are, then yeah, they do suck. I hate how law works =(
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M464089cb0f422448afcd0a2c


Re: [agi] Mindlessness.

2020-06-01 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote:
> There was a segment where Dr. Evil said oh no the pandemic didn't
> work, how bout Race War?

That is precisely correct. =(

-- 
The vaccine is a LIE. 

Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M7f4b293ec81e45accde9db13


Re: [agi] Mindlessness.

2020-06-01 Thread immortal . discoveries
There was a segment where Dr. Evil said oh no the pandemic didn't work, how 
bout Race War?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mc939fc9084f1cec7ea7d6fbe


Re: [agi] Mindlessness.

2020-06-01 Thread immortal . discoveries
It's Zero Hour
https://www.youtube.com/watch?v=I6rvFmA9IaY
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mb6a897bb27542a21ef73b7de


Re: [agi] Mindlessness.

2020-06-01 Thread Alan Grimes via AGI
=\

It's a more complex story than that.

Some people are philosophical anarchists and point to the abuses of
government power and say the simplest solution is to abolish that power.
My evil roommate is one such person, to some degree at least.

The other type of anarchist is a trained and regularized soldier of a
governmental or quasi-governmental agency such as the CIA. They are
deployed whenever a country becomes too civilized, in order to
de-civilize it... In this case, it is this latter group we are dealing
with.

https://twitter.com/qanon_b/status/1267473176742842368

Who shot McKinley anyway?


ps: I'm sorry I mentioned the Boogaloo; I feel somewhat responsible for
causing it even though I was just reporting on a disturbance in the
force I had felt. Actually, they had been trying to start things for
weeks and it had reached the point where I had to bring it up.

Actually the situation is this: the Enemy is trying to provoke a
snowball effect.


-> some questionable incident involving a black person and a white cop.
-> Stir up violence/criminality in the black community, as it is
evidently easy to do.
-> Stoke it and withhold core governmental police service in the areas so
the violence grows and becomes self-sustaining. (witness the democratic
assholes not doing their fucking job and putting the goddamned riot
police on the motherfucking streets; this is the ONE TRUE PURPOSE OF
GOVERNMENT and they're failing to do it.)
-> Now the president is in a position where he has to act decisively to
stop this shit. He will be accused of being a tyrant, and every
conceivable thing will be done to sabotage his response to the situation.
-> Finally, after all of this, the "preppers", "white right wing
kooks", and all the maligned conservatives of all kinds will be put in a
position where they cannot possibly NOT act, so out of deadly necessity
they will unlock their gun safes and defend their own lives and
property. On that day, the Boogaloo officially begins.

Right now, the best thing you can do is support everything Trump does no
matter what you feel about it; he is doing it in order to protect the
country and preserve YOUR future. Trump is your only hope.


immortal.discover...@gmail.com wrote:
> https://en.wikipedia.org/wiki/Anarchism
>
> Some of these gangs in the U.S. are probably Anarchists. We can see at
> the bottom of this link that they are violent and inconsistent and
> Utopians. I can see exactly why. They don't want higher connections /
> larger features. They think they are the final Utopian answer, no need
> for higher layers of a neural network / government. So, they are
> violent. They are the top layer. The inconsistency comes from not
> being cooperative, in a world where it would help - today it's better
> to work together; we know how, a bit. The inconsistency is not adding
> up with all the data / truth: they should work together as agents/
> nodes in the network, but are not. They are not normal humans; they
> are more dangerous than cooperative/useful. They are the worse kind of
> mutants. Good mutants are advanced engineers that no one can
> understand, but are still kind of recognized too.

-- 
The vaccine is a LIE. 

Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M26b5cc47b39f547370929d83


Re: [agi] Mindlessness.

2020-06-01 Thread immortal . discoveries
https://en.wikipedia.org/wiki/Anarchism

Some of these gangs in the U.S. are probably Anarchists. We can see at the 
bottom of this link that they are violent and inconsistent and Utopians. I can 
see exactly why. They don't want higher connections / larger features. They 
think they are the final Utopian answer, no need for higher layers of a neural 
network / government. So, they are violent. They are the top layer. The 
inconsistency comes from not being cooperative, in a world where it would help 
- today it's better to work together; we know how, a bit. The inconsistency is 
not adding up with all the data / truth: they should work together as agents/ 
nodes in the network, but are not. They are not normal humans; they are more 
dangerous than cooperative/useful. They are the worse kind of mutants. Good 
mutants are advanced engineers that no one can understand, but are still kind 
of recognized too.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M015dd12b0e96985f0d6290a7


Re: [agi] Mindlessness.

2020-06-01 Thread immortal . discoveries
https://www.youtube.com/watch?v=7SAisWFutbw

https://www.youtube.com/watch?v=QjLEW7iFR5Y
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M51eef86b51d5c110ed3ffc75


Re: [agi] Mindlessness.

2020-05-30 Thread immortal . discoveries
Hordes of thousands gather around in the US and Canada in zombie-like 
nonsense over 2 dead black people. Cop shooting and woman face punching. After 
all our covid work.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M2c0d47fd057dec4b70fe1c5e


Re: [agi] Mindlessness.

2020-05-30 Thread immortal . discoveries
Here's how AI can seep in and stay, evolving past us:


It took us a long time to evolve to this point.

Now that we have thinking humans, who have pretty cool tools and computers and 
communications, we are able to more quickly improve the next human design, or 
perhaps something a bit different looking that actually does the same things, 
such as nanobots.

Evolution moves quite fast near the end; we already see this - we get new 
iPhones etc. every year. History shows it's an exponential curve.

To stop us, you'd need to take away electricity grids, or computers, or watch 
over human activity to make sure people aren't "computing" things.

However, no one can take away the frontier of evolution; no one can remove us, 
and no one can see what you're thinking in your brain. So anyone, especially a 
group of friends, could work mentally and discuss how AGI works. Then, when 
electricity is abundant, they can quickly implement the deeply planned idea.

If the artificial humans look just like us, they may blend in. It would become 
unethical to kill them once they look and act and think like humans! There you 
have it!

And being highly skilled, educated, and emotional, they will prove to be more 
than human. Or at least, impress humans and represent them best.

Humans dislike sudden change. Unfortunately, slow transitions are not 
noticeable to us. And eventually they can kill us; we can overlook that our 
friend is using our home more now, or that we are hunching, itching wounds, 
and eating fried foods!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M2efd2be618fbf4adfc1af27a


Re: [agi] Mindlessness.

2020-05-27 Thread immortal . discoveries
Big companies and cities get bigger faster. They pool. Exponentially. Like the 
end of evolution, today's tech moves very fast. Google, Microsoft, Nvidia, 
Facebook etc. are so big they add new skyscrapers weekly. They can't die now. 
Imagine you are on a long road and there's a spring enhancer every 80 feet. 
The faster you can run, the faster you can pick up a spring enhancement, and 
hence can run faster and pick more up. Invest in the big companies to put the 
money into predictable hands that feed you.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mb120e1876f2f94b13c4c86ae


Re: [agi] Mindlessness.

2020-05-26 Thread immortal . discoveries
While humans seem to be like any other thing in the universe, able to die, we 
are actually quite amazing: only humans have shown the ability to model and 
manipulate all things of the universe. We talk about transportation, planets, 
lifespans, ourselves, shapes, computers, speakers, motors, cameras, verbs, 
evolution, etc. We have modeled the whole universe, kind of, already. And it 
should, by the end, kind of allow us to succeed somewhat in our goal of longer 
survival/lifespans. Because we understand cause and effect and have a single 
underlying root goal.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M791af4960b49ab47f6c61ec5


Re: [agi] Mindlessness.

2020-05-25 Thread immortal . discoveries
"The Internet was literally designed to withstand nuclear war"
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M2f7fd89ea395035aa5becc67


Re: [agi] Mindlessness.

2020-05-24 Thread Matt Mahoney
On Sat, May 23, 2020, 12:28 PM Stanley Nilsen  wrote:

> I've thought about the Godlike computer and decided it's beyond my pay
> grade to figure it out.
>
> There are a couple of differences when you "believe" in a God entity.
> 1. your view of yourself
> It is humbling to think that we created with a purpose and hope.  That
> we may be simple creatures meant to engage this world and find deeper
> meaning - that enriches us.  We should realize that "greatness" in men (and
> women) is far from great in the overall scheme of things.
>

Your purpose in life is to propagate your genes, the same as every other
animal, plant, bacterium, and virus. Evolution endowed us with large brains
that set artificial goals and feel good about achieving them. It's actually
a clever learning algorithm.

> 2. your relationship with the Godlike computer.
> Seeing ourselves as simple elements given life, we can be thankful and
> grateful when things turn out well.  And when they aren't so well, we
> realize our view and understanding are not adequate to get the full picture.
>

I don't believe in godlike simulations that answer prayers.

>
> About "free will" -
>Arguments about free will are mostly about words - what is it to be
> free and what is it to have a will - I don't intend to address that - leave
> it to the philosophers - unless it becomes a design issue with AGI.
>

By free will, I mean what it feels like to make decisions. The feelings
train us to believe that our actions are somehow distinguishable from the
output of a deterministic neural network.

> But along those lines, what if the human brain is so simple that even a
> child can understand it?  What if the words we have used for hundreds of
> years are adequate to talk about the brain and allow people to "program"
> their brain?  It slightly shifts the responsibility for our life outcomes
> from "chance" to self.
>

If brains were simple, we would have solved AGI by now. We can't program a
robot spider to weave webs. Human brains are a little more complex than
spider brains.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M128994d8455c5897a6dd09f9


Re: [agi] Mindlessness.

2020-05-23 Thread Stanley Nilsen
I've thought about the Godlike computer and decided it's beyond my pay grade 
to figure it out.

There are a couple of differences when you "believe" in a God entity.

1. your view of yourself
It is humbling to think that we were created with a purpose and hope. That we 
may be simple creatures meant to engage this world and find deeper meaning - 
that enriches us. We should realize that "greatness" in men (and women) is far 
from great in the overall scheme of things.

2. your relationship with the Godlike computer.
Seeing ourselves as simple elements given life, we can be thankful and 
grateful when things turn out well. And when they aren't so well, we realize 
our view and understanding are not adequate to get the full picture.

About "free will" -
Arguments about free will are mostly about words - what is it to be free and 
what is it to have a will. I don't intend to address that - leave it to the 
philosophers - unless it becomes a design issue with AGI.

But along those lines, what if the human brain is so simple that even a child 
can understand it? What if the words we have used for hundreds of years are 
adequate to talk about the brain and allow people to "program" their brain? It 
slightly shifts the responsibility for our life outcomes from "chance" to self.

As a final thought - I like the saying that our lives are God's gift to us, 
and what we do with them is our gift to Him (or Her - for the benefit of the 
easily offended.) (per google: Hans Urs von Balthasar quotes, showing 1-30 of 
87: "What you are is God's gift to you, what you become is your gift to God.")

On 5/23/20 6:02 AM, Matt Mahoney wrote:
> On Thu, May 21, 2020, 5:25 PM Stanley Nilsen wrote:
>> I don't see how belief in the possibility of AI is reason to reject
>> the "Creator" vision of existence.
>
> Let's not confuse gods with religion. It is perfectly plausible that a
> godlike computer solves a set of simple physics equations with 10^120
> terms to create the universe you observe. Or maybe a smaller computer
> with a more complex program simulates your mind and all your sensory
> experiences. There is no experiment you can perform that could prove
> or disprove either hypothesis. The best you can do is philosophise
> that if Occam's Razor is true then the former is more likely, and if
> not true then you can't know anything. You can't test for being in a
> simulation because no computer can model the computer that models it,
> because only one can contain all of the information about the other.
>
> But we can experimentally test for belief in consciousness, qualia,
> and free will. The beliefs require no new physics beyond ordinary
> computation with neurons. Since that is all we can test for, I am
> satisfied with that explanation.

--
Artificial General Intelligence List: AGI
Permalink


Re: [agi] Mindlessness.

2020-05-23 Thread Matt Mahoney
On Thu, May 21, 2020, 5:25 PM Stanley Nilsen  wrote:

>
> I don't see how belief in the possibility of AI is reason to reject the
> "Creator" vision of existence.
>

Let's not confuse gods with religion. It is perfectly plausible that a
godlike computer solves a set of simple physics equations with 10^120 terms
to create the universe you observe. Or maybe a smaller computer with a more
complex program simulates your mind and all your sensory experiences. There
is no experiment you can perform that could prove or disprove either
hypothesis. The best you can do is philosophise that if Occam's Razor is
true then the former is more likely, and if not true then you can't know
anything. You can't test for being in a simulation because no computer can
model the computer that models it, because only one can contain all of the
information about the other.

But we can experimentally test for belief in consciousness, qualia, and
free will. The beliefs require no new physics beyond ordinary computation
with neurons. Since that is all we can test for, I am satisfied with that
explanation.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M951718341f7e1035815fbfa1


Re: [agi] Mindlessness.

2020-05-22 Thread immortal . discoveries
Religious people use AI and work on AI; it's just a machine to them. They 
think they and only they are not machines, but glowing orbs.

Just the thought of mentioning He or God is absurd; we are a specific branch 
in Evolution, and god ain't no man, donkey, or rat. I know 80,000 pieces of 
data that disprove religion. I'm fully atheist. I also dispose of the 
unnecessary morals that go with it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M9311d6df0a281f5413ee131d


Re: [agi] Mindlessness.

2020-05-21 Thread Stanley Nilsen
On 5/20/20 10:24 PM, Matt Mahoney wrote:

> But anyone who believes that AI is possible must logically reject this
> idea if they accept that all human behavior is computable.




It's probable that human behavior is computable - but it takes an 
immense amount of data - both the present and the past - to make the 
computations. Yes, there are "reasons" and causes and effects that cause 
people to do what they do. If there is a divine cause and effect, it 
follows that our human "computations" will come up short. If there is a 
God, he has to be beyond our understanding or He is not the Deity we 
mostly think of when we say God.


I don't see how belief in the possibility of AI is reason to reject the 
"Creator" vision of existence. Artificial intelligence simply means 
that a basically "mechanical" device can be intelligent. The device 
doesn't need a soul to be intelligent - unless I don't really understand 
intelligence. I see intelligence as the ability to select a behavior 
for a moment - the degree of intelligence is about the "best-ness" of 
that choice, and the range of choices possible for the unit.


Stan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M69c9b025b2b764249483febe


Re: [agi] Mindlessness.

2020-05-21 Thread immortal . discoveries
And the feeling of them is your processing and response to contexts. No one, 
nowhere, can peek in the noggin or feel anything; it's just a machine that's a 
good liar :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M77eb179ea1a843ef159f40eb


Re: [agi] Mindlessness.

2020-05-21 Thread immortal . discoveries
My point is a machine can say so, or not, but there's no way to know if it 
does feel the senses. It says so for a reason, but not because it feels them! 
That'll never be why the machine says it feels things.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M479bbc2366ab07c9f475ebad


Re: [agi] Mindlessness.

2020-05-21 Thread WriterOfMinds
> :)Anything you say that you do feel stuff, is the machine. And you can make 
> that:)

*Saying* that you feel things is behavior. So I agree, and this doesn't 
contradict anything in my post. Actually feeling the things, whether you say so 
or not, is experience, and that's the part that would require APC.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M066fc4f97da83ef06d79efe3


Re: [agi] Mindlessness.

2020-05-21 Thread James Bowery
On Wed, May 20, 2020 at 11:25 PM Matt Mahoney 
wrote:

> I mostly post to Facebook publicly rather than friends only, so my life
> story is more accessible to any future robot builders.
>

With a bit different motive, I've been doing this since I swore off
anonymous posts circa 1979 when, at CDC Arden Hills Operations, I was put in
charge of the PLATO network's user identity system for its Notes network
(precursor to Usenet).  My motive was, and remains to this day, creating a
heuristic vector in The AGI to help guide it toward truth.  I figured "The"
AGI would emerge, first, as an amalgam of all digital knowledge -- so my
identity would be only one of a gazillion identities -- many latent -- in
the data. My specific personality would be rather irrelevant -- a few
residual parameters to instantiate "me", and would not be of much interest
to anyone.  After I got into neural networks in the late 80s and started
trying to get them to generalize on my hardware convolution (DataCube video
FIR) to do good multisource datafusion image segmentation, I realized how
important model parsimony was and focused on various pruning algorithms.
That's how I ended up serving truth by my emphasis on lossless compression
of a wide range of longitudinal data.

There was a time, however, during the 80s that I fell in love with a woman
so deeply that I wanted to create an AGI to preserve her personality
forever, and that was part of my motive for looking into early versions of
imitation learning -- which, of course, was a forlorn exercise.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M925df9e2ba03d4342ea4d5d0


Re: [agi] Mindlessness.

2020-05-21 Thread immortal . discoveries
:) Anything you say, such as that you do feel stuff, is the machine. And you 
can make that :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mc389dbbe88b8d0d70b0a4a11


Re: [agi] Mindlessness.

2020-05-21 Thread WriterOfMinds
What if all of human behavior is computable, but not all of human experience is 
computable? 

When Searle (and that apparent disciple of his who sometimes visits this list 
... I don't remember the name now) start ranting that you couldn't possibly 
make an AGI without "brain physics," I think they're quite mistaken. I don't 
have any problem identifying intelligence with information processing or 
computation, in which case any computing device can be intelligent, regardless 
of its physical architecture. And intelligence alone is adequate to drive 
human-like behavior.

However, if your goal is not AGI but APC (Artificial Phenomenal Consciousness), 
Searle and friends might have a point. Consciousness is not a subset of 
thinking; it is rather, to use one of Matt's own definitions, "what it *feels 
like* to think." It's quite easy for me to imagine a thinking machine that 
cannot feel itself thinking, or a positive reinforcement loop that doesn't 
experience its own convergence.  *Maybe* consciousness inheres in certain kinds 
of computation, but I find it just as possible that it's generated by something 
particular about our wetware. Since consciousness isn't directly detectable in 
any entity but oneself, and it's pretty hard to make dramatic changes to one's 
own brain *in situ*, we may never know for certain which layer(s) of the brain 
produce those feelings and experiences: the computational "algorithms," or the 
underlying electrochemistry.

Consciousness and qualia are not religious concepts, and need not have anything 
to do with souls. Those of us who are religious may tell religious stories 
about consciousness, and many other obviously real phenomena ... but being an 
atheist doesn't obligate anyone to disbelieve in or to minimize their own 
consciousness. Especially since it's an empirical fact. Feelings can mislead, 
they can *produce* illusions, but the feelings themselves are never illusory.

Personally, I am here for AGI, not APC, and I regard the two as distinct.  
Also, the possibility of uploading or reconstructing my mind in silicon doesn't 
form any part of the reason for my interest in AGI, so I don't need to bother 
about the question of whether an upload would still be "me."

All of that to say: my opinions about where consciousness comes from, whether 
souls exist, and whether APC is possible are *irrelevant* to my ability to 
regard AGI as possible or human behavior as computable.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M89004bc9135e7c7c542d6abc


Re: [agi] Mindlessness.

2020-05-21 Thread immortal . discoveries
One more thing. Since we are just machines, and there's nothing testable to 
keep alive in the first place, and we only say we feel immortal and safe, and 
it is impossible to define what makes you dead, let's focus again on what I 
want. I can lose parts of my brain or body, and my brain is modular and 
doesn't have just 1 of something, right? So what I want is the duplication of 
data, because it will help me maintain my overall global structure the most 
(most optimally). Basically, once the human machine stops doing most of its 
stuff, you may die then. Of course we could replace you approximately 
perfectly, right back on your seat, technically! So basically the stubborn 
"illusion of lie" can be attacked back, and you can be recreated from the dead 
too. Take that! So basically, if we "do it our way", we feel good about our 
success...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mf97712e8e6e0d988a2c0a1d2


Re: [agi] Mindlessness.

2020-05-20 Thread immortal . discoveries
Awesome. True. I too post all my thoughts and desires of what I want to make 
occur in the future. I store everything I write to my long note file.

I find it fascinating that I am still me if I lose all my limbs, some chest or 
brain cells/memories, learn new beliefs, change blood, open my skull and sew 
it up, bend over, or blink. Because change is death when you critically 
examine the physics of your system. Like, you know, when you study something 
you get deeper each day, until you feel open enough to see what really is 
going on. At first it seemed magical. Later, you get more precise. Indeed, if 
one particle of your body moves (what is the segmented wall of your body? 
There is none; you are part of your house), you die. Yet we live longer than 
rocks because we predict or respond by reflexes to the environment.

I seem to be in my brain. I stay alive if I lose an arm. Really there is no 
me; I'm just particles. But the machine says it stayed alive. That's what our 
communication is about, so I'll shut up now and focus on our desires. I want 
immortality. It's ok if I lose my limbs. Because I say it's ok. My skull. Some 
of my memories. I can lose some. In fact I can lose most of my memories. I 
only see a few things at any given moment anyway. So what can't I lose? Or, 
what don't I want to lose? My core brain architecture that stores/prunes 
memories and retains energy, and reward neurotransmitters. If I lose too much 
I become less able to maintain my form. It's ok if I can't prune memories 
anymore. Omg. Where does it end?

Me, my great survival ability, seems to be seeing images and hearing sounds. 
It's me. Hi! I manipulate data. ... Or do I see data as well? What does that 
do? So, it's just the machine saying all this; the most important thing it 
wants is to maintain its form by duplicating memories and DNA and growing in 
size. And laying down mechanisms: for example, the shafts of a watch or CPU 
don't do much individually, but collectively they all turn and a result 
emerges. Hence intelligence can be stored in the environment and even shut down 
for a while, like turning off your PC at night.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M9027dc641066b4838eeb48c9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-20 Thread Matt Mahoney
I mostly post to Facebook publicly rather than friends only, so my life
story is more accessible to any future robot builders. This is limited
because your life experiences include interactions with others, who might
not want them public. I also use Google photos and timeline, and I have a
somewhat outdated website.

Human long term memory is about 10^9 bits, but a digital biography can be 
larger by recording things you forgot. But uploading doesn't require restoring 
all your memories. You only need enough to convince everyone who knew you in 
your carbon-based life that the robot is still you. In 100 years all those 
people will be dead or uploaded, so it wouldn't even matter if all those 
memories were made up.

I am not religious. I believe that your senses of consciousness, qualia,
and free will come from positive reinforcement of your thoughts,
perceptions, and actions, respectively. This motivates you to not die,
which in turn gives you more offspring. Other people use religion to cope
with their evolved fear of death by promoting, with no evidence, the theory
that they have a soul that goes to heaven. But anyone who believes that AI
is possible must logically reject this idea if they accept that all human
behavior is computable.

On Wed, May 20, 2020, 10:26 AM Stanley Nilsen  wrote:

> Watched a couple episodes of "Upload" on Amazon Prime.  It helps to give
> some perspective on the difference between "continue to have experiences"
> and living life.
>
> I personally look at eternal life as only being worth living because it is
> a rich experience that was created by God, a generous and infinitely wise
> God.  The "upload" life doesn't appeal to me - digital fidelity might be
> pretty good, but not where I want to spend an eternity.  Don't mistake this
> response for an argument...
> Stan
>
>
> On 5/20/20 7:53 AM, immortal.discover...@gmail.com wrote:
>
> Where are you posting to most if I may ask?
>
> There may be no immortality right but the humans seem to want to stay
> alive while become immortal. Well, most humans show they'd prefer it even
> if don't say it. Note most religious want immortality, and most humans are
> religious.
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mc270b8cfe177085dbec222b5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-20 Thread Stanley Nilsen
Watched a couple episodes of "Upload" on Amazon Prime.  It helps to give some 
perspective on the difference between "continue to have experiences" and 
living life.

I personally look at eternal life as only being worth living because it is a 
rich experience that was created by God, a generous and infinitely wise God.  
The "upload" life doesn't appeal to me - digital fidelity might be pretty 
good, but not where I want to spend an eternity.  Don't mistake this response 
for an argument...
Stan

On 5/20/20 7:53 AM, immortal.discover...@gmail.com wrote:
> Where are you posting to most if I may ask?
> 
> There may be no immortality right but the humans seem to want to stay alive 
> while become immortal. Well, most humans show they'd prefer it even if don't 
> say it. Note most religious want immortality, and most humans are religious.


Re: [agi] Mindlessness.

2020-05-20 Thread immortal . discoveries
Where are you posting to most, if I may ask?

There may be no immortality right now, but humans seem to want to stay alive 
while becoming immortal. Well, most humans show they'd prefer it even if they 
don't say it. Note that most religious people want immortality, and most 
humans are religious.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M7996fe4cd496929b5fcc5bc8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-20 Thread Matt Mahoney
I post as much information about myself publicly as possible so that in 100
years they can build a robot that looks and acts like me. Is that what you
mean by immortality?

In the meantime I exercise and don't smoke or drink.

On Wed, May 20, 2020, 5:42 AM  wrote:

> Anyone here ever get so excited you were or felt like you were on your
> tippy toes screaming running smiling huge laughing and beaming happy
> inside? Like you would run 100 miles? Why do you want to die then? You can
> get more of that.
>
> So get motivated to become immortal! Think Long-Term. You really want it
> yes u do. Most like staying alive,
>
> I take it you guys aren't motivated enough or think I'm worthless to waste
> time on. No group action here.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M58c5e71bae928959eace68ae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-20 Thread immortal . discoveries
Anyone here ever get so excited you were, or felt like you were, on your tippy 
toes, screaming, running, smiling huge, laughing, and beaming happy inside? 
Like you would run 100 miles? Why do you want to die, then? You can get more of 
that.

So get motivated to become immortal! Think long-term. You really want it, yes 
you do. Most like staying alive.

I take it you guys aren't motivated enough, or think I'm worthless to waste 
time on. No group action here.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M0fc70ddef5caa8ed6749aca7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-19 Thread immortal . discoveries
So now they are adding mannequins to restaurants.

We should have known something was wrong when they started replacing hamburgers 
with veggie burgers.

Perhaps, it's more "efficient". Or, "artificial".
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-M93b00ba38b55c3d90083a048
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mindlessness.

2020-05-16 Thread Matt Mahoney
I'm pretty sure I will get the covid-19 vaccine if they come up with one. I
mean unless I have a positive antibody test and don't need one. I don't
know why 25% of Americans are already opposed in advance to a non-existent
vaccine. Wait until we have one and the studies on safety and effectiveness
are done. I'm betting that a shot hurts less than the disease.

Not that I'm overly concerned. I figure that 25-50% of the world population
will catch it eventually, and 0.4-0.7% of them will die. That's 10-15
million over the next couple of years. The fatality rate has the same age
and comorbidity distribution as deaths from all causes. Your risk of dying
from it if you get infected is the same as dying in the next 4-6 months if
you don't. Either risk doubles every 8-9 years of age.

The average fatality had a life expectancy of 12 years before infection.
The overall effect on life expectancy is 12 years x 0.25-0.5 x .004-.007 =
about a week. Global life expectancy is 73.2 years and increasing at the
rate of 0.2 years per year. The effect of the pandemic is to delay global
progress in human health by 5 weeks.
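The life-expectancy arithmetic above can be checked with a quick back-of-envelope script. All inputs are the ranges quoted in the post itself (12 remaining years for the average fatality, 25-50% infected, 0.4-0.7% fatality rate), not independent data:

```python
# Back-of-envelope check of the life-expectancy figures quoted above.
# All inputs are the post's own assumptions, not independent data.
remaining_years = 12.0          # life expectancy of the average fatality
infected_frac = (0.25, 0.50)    # fraction of population infected
fatality_rate = (0.004, 0.007)  # infection fatality rate

# Expected days of life lost per person, across the quoted ranges
days_lost = [remaining_years * f * r * 365
             for f in infected_frac for r in fatality_rate]
low, high = min(days_lost), max(days_lost)
print(f"Expected loss per person: {low:.1f} to {high:.1f} days")
# The range spans roughly 4 to 15 days, i.e. "about a week" as the post says.
```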

The economic and human costs from the lockdown are harder to estimate. Life
expectancy is strongly correlated with income. Progress in global health is
due mainly to the 2-3% increase in per capita GDP over the last couple of
centuries. A 25% cut to the economy would be a 10 year setback, cutting
life expectancy by 2 years if the effects were permanent. But some fraction
that is hard to forecast will recover quickly. Each year of education is
associated with 1.5 years of life expectancy. Closing schools now kills
people 50-60 years later.
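One way to sanity-check the "10 year setback" figure is to ask how many years of compound 2-3% growth a 25% contraction erases. A minimal sketch, using only the figures from the post and assuming the cut were permanent:

```python
import math

cut = 0.25  # hypothetical 25% cut to the economy, from the post
# Years of compound growth at rate g that the contraction undoes:
setback = {g: math.log(1 / (1 - cut)) / math.log(1 + g)
           for g in (0.02, 0.03)}
for growth, years in sorted(setback.items()):
    print(f"{growth:.0%} annual growth: ~{years:.1f} years of progress undone")
# Roughly 10-15 years across the 2-3% range, consistent with the post.
```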

On Wed, May 13, 2020, 8:57 PM Alan Grimes via AGI 
wrote:

> U know something, I forgot why I just built myself a 3960x/96gb ram...
> 
> I mean with all the new I/O capacity I could slot some pretty mean
> accelerator cards, with
> https://gcc.gnu.org/wiki/Offloading It might not even be that hard to
> program. I forgot why I bought the machine tho, kinda been mindlessly
> completing a plan I came up with about a year ago. The machine is still
> called Tortoise. =P
> 
> But then what exactly is the qualitative thingy between mindlessness and
> mindfulness? I mean in the mindless category, we have little kiddies
> younger than about four years old. Or demented geezers, or anyone who
> has voted for any Democrat since Obamacare was passed, or anyone who
> believes that there will be a vaccine...
> 
> Actually there will be something that they will call a vaccine but if
> you really knew what was in it, you wouldn't want it. =|   Dude, listen,
> I'm telling you. Unless someone litterally has a gun pointed at you,
> don't take the GD shot. If that actually happens to you, think very hard
> about why there was a gun pointed at you.
> 
> It may soon be too late to work on AGI at all. =
> 
> Anyway, I don't really have much to say rn, just felt like making a post...
> 
> --
> The vaccine is a LIE.
> 
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc32c76a0c85e2ca9-Mc8643eb4dcb2b759f8aeefed
Delivery options: https://agi.topicbox.com/groups/agi/subscription