[agi] Re: The Dawn of AI (Machine Learning Tribes | Deep Learning | What Is Machine Learning)

2019-08-29 Thread rouncer81
cool vid!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tecb9c0c21d65fcb2-M34d92fe3eb112d0556f8c64c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread rouncer81
Yep. Labels first, actual understanding later on.
Permalink: https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-Mcb4fa9e98e050133e8e97495


[agi] Re: I am releasing all of my AGI research soon

2019-08-29 Thread immortal . discoveries
I know some of my generated knowledge is on the edge, but that extra context 
helps me reach deeper, farther answers, and I can verify later which is truly 
correct. Even if some of it is wrong, it doesn't affect the main pile.
Permalink: https://agi.topicbox.com/groups/agi/Tb2c56499bd62ee8a-Mfdc263ea0f7b7ebe6a55390d


Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread immortal . discoveries
I agree rouncer81. The visual cortex classifies objects like words first; then 
it sees them next to each other, e.g. car>road. This is the higher "sentence" 
temporal network. You don't recognize them as one joined object, but rather as 
parts next to parts that make up a part next to another part, repeated.
Permalink: https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-M62200c06aaf75ff2058e92dd


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread immortal . discoveries
@Matt We think the same.
Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mca07bd6601dd56a5ab108f57


Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread rouncer81
Matt Mahoney, I have an argument against that: computer vision ends before 
symbolic relations start.

You're saying that your eye invents jokes with what it sees; I say no, the 
vision just classifies the visible aspect alone.
The rest of the derivation from the eye is done by the rest of the brain.
Permalink: https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-M6d5c0fd241ff61dd9717585a


Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread Matt Mahoney
I doubt segmentation will help with image recognition. You lose context.
You recognize people not just by their faces but by when and where you see
them, who they are with, and what they say. It is easier to recognize a car
on a road than a car or a road on a white background.

We tried word segmentation in speech recognition and parsing in sentence
recognition. It doesn't work very well.
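(Editor's toy sketch of the context point above: an ambiguous blob may be equally likely to be a car or a cow from pixels alone, but conditioning on scene context such as "road" shifts the posterior. The numbers and the car/cow framing are illustrative assumptions, not from this thread.)

```python
# Context as a Bayesian prior shift: pixel evidence alone is ambiguous,
# but the scene context ("road") makes "car" far more likely.
priors = {"car": 0.5, "cow": 0.5}           # from pixel features alone
likelihood_road = {"car": 0.9, "cow": 0.1}  # P(road context | class), assumed

def posterior(context_likelihood, priors):
    """Combine per-class priors with context likelihoods via Bayes' rule."""
    unnorm = {c: priors[c] * context_likelihood[c] for c in priors}
    z = sum(unnorm.values())
    return {c: unnorm[c] / z for c in unnorm}

post = posterior(likelihood_road, priors)
print(round(post["car"], 2))  # 0.9
```

With context, the same ambiguous pixel evidence resolves to "car" with 0.9 probability; segmenting the object out of the road scene would throw that prior away.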

On Thu, Aug 29, 2019, 8:39 AM  wrote:

> If you have a 3D camera, segmentation is even easier. It's not even
> really machine learning, it's just filters.
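(Editor's minimal sketch of the quoted depth-filter idea: with a per-pixel depth map from a 3D camera, foreground segmentation reduces to thresholding. The NumPy framing and the near/far cutoffs are illustrative assumptions.)

```python
import numpy as np

def segment_by_depth(depth_map, near=0.3, far=1.5):
    """Foreground mask from a depth image by simple range thresholding.

    depth_map: 2-D array of per-pixel distances in meters, as produced
    by an RGB-D camera. Pixels inside [near, far] count as foreground.
    """
    return (depth_map >= near) & (depth_map <= far)

# Toy 2x3 "depth image": an object ~0.5 m away on a ~3 m background.
depth = np.array([[3.0, 0.5, 3.0],
                  [3.0, 0.6, 3.0]])
print(segment_by_depth(depth).astype(int))
# [[0 1 0]
#  [0 1 0]]
```

No learned model is involved; the whole "segmenter" is two comparisons and an AND, which is what "just filters" amounts to here.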

Permalink: https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-M56375d2d7d10ab11a9a4b53e


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread Matt Mahoney
On Thu, Aug 29, 2019, 7:39 AM  wrote:

> On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
>
> Like I said when I first posted on this thread, phenomenal consciousness
> is neither necessary nor sufficient for an intelligent system.
>
>
> This is the premise you are misguided by. Who is building the
> intelligent systems? Grunts that happen to have phenomenal
> consciousness, not the opposite.
>

Phenomenal consciousness is what thinking feels like. This feeling evolved
because it motivates you to not die and let those feelings stop. It doesn't
require any new physics. It can be explained entirely by neural
computation. We used to call it a soul, but those of us who understand
computers know better.

Likewise, qualia are what perception feels like, and free will is what
(deterministic) action feels like. These feelings also evolved so that you
would fear dying.

I realize it is disturbing to conclude that your mind is just a
computation. But if AGI is possible, then it has to be. There is no
objective evidence other than your own feelings that phenomenal
consciousness, qualia, or free will is anything else. And feelings are just
neural signals that modify your behavior.

Remember that the first objective of AGI is to reproduce human capabilities
needed to automate work. One of those capabilities is modeling human
behavior so that AGI can communicate more effectively with its masters.
Phenomenal consciousness, qualia, and free will don't affect human
behavior, but our opinions about them do have observable effects that need
to be modeled. An AGI should not have emotions, but should understand how
they work in humans in order to accurately predict behavior.

The second objective of AGI is to extend life. Brains wear out and will
eventually need to be replaced with functionally equivalent devices, just
like all our other organs. The simplest way to do this is to develop a
model of your mind through years of observation and program a robot that
looks like you to carry out the model's predictions of your actions in real
time. As far as anyone can tell, the robot is you, but in a substrate where
your mind can be backed up to the internet.

FAQ:

Q. If I shoot myself, will I wake up as a robot?
A. Yes, because you will be programmed to believe that.

Q. Won't I just be pretending to have feelings?
A. No, you will have no memory of pretending to act out feelings you don't
have, so the feelings will seem real. That is how your brain works now.

Q. Won't I lose free will?
A. No. Free will is an illusion. Your model, like your brain, will still be
programmed to express this illusion while behaving deterministically.

Q. Won't I lose qualia?
A. No. Qualia are an illusion. You will still respond to input the same way
and you will still express a sensation of qualia.

Q. Won't I be a philosophical zombie?
A. No. There is no such thing as a zombie. You will still be able to
imagine such a thing and honestly claim not to be one.


Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mb8f87ae225ca8091e99a22ee


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
Clarified:

AGI={I,C,M,PSI}={I,UCP+OR,M,BB}; BB=Black Box

John
Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M3849c56767c291ea6a534cf9


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 6:32 AM, Nanograte Knowledge Technologies 
wrote:
> Qualia are communicable.
> As such, I propose a new research methodology, which pertains to one-off
> valid and reliable experimentation when dealing with the "unseen". The
> "public" and "repeat" tests for vetting it as science could be replaced by a
> suitably-representative body of reviewing scientists who are accredited in
> the limitations of subjective, scientific observation.

Originally I did not like the word "qualia", but it's actually quite good. When 
Chalmers, or whoever named it, put his or her stake on that location in the 
language, on that combination of letters, it was a good choice.

Part of the issue here is that engineers, particularly software engineers, 
cannot wait for science in many cases. Engineering is allowed to break physics 
or invent new physics in a virtual world, and engineers need words to put into 
code. Also, there are many symbol issues in contemporary language that have not 
been addressed generally. Two conscious entities need better communication 
channels to convey structure more efficiently, and this is easier to do among 
software agents than among humans, by expanding the symbol complexity and 
bandwidth. In a perfect world, full qualia would be instantly transmittable. 
Today this is approximated by transmitting multimedia rather than just natural 
language, hence the addition of mechanisms like MMS, video conferencing, 
realtime document sharing, etc.

Some researchers say qualia cannot be transmitted. I would change that to say 
full qualia are not transmittable yet.

John
Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M75616ccb2a402d5bdda20964


[agi] Re: I am releasing all of my AGI research soon

2019-08-29 Thread rouncer81
Don't forget, we need DNA samples as well to go with your amazing theories. :)
Permalink: https://agi.topicbox.com/groups/agi/Tb2c56499bd62ee8a-Ma0c503fa5aa8a8153756dd15


Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread rouncer81
Good to hear; you still sound like you're on top of things.

I think there's no need for low confidence; there's going to be a simple 
solution to this singularity business.
Permalink: https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-M4820fde7dd51af5fadb5e5d5


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
> Like I said when I first posted on this thread, phenomenal consciousness is 
> neither necessary nor sufficient for an intelligent system.

This is the premise you are misguided by. Who is building the intelligent 
systems? Grunts that happen to have phenomenal consciousness, not the 
opposite.

Well, I was thinking of calling all this Gloobledeglockedicnicty, or using 15 
other terms individually every time it's mentioned. But my qualia on it fit 
better into the term "consciousness", and other grunts can relate to it better. 
(Well, some of them :) )

...

I also want to build an artificial heart. Oh nnooo, can't call it a heart, it 
doesn't feel love. Note: IMO the heart is an integral part of human intelligence.

John

Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mce6ed6677364c02685c2d5cc


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread Nanograte Knowledge Technologies
"Qualia are personal and incommunicable *by definition,*..."

I tend to disagree with the assertion that qualia are "incommunicable". Shall 
we revisit the definition for absolute proof?

Qualia are communicable. I have proven that using a scientific method. I'm 
referring to qualia here in the context of "tacit knowledge". It does not 
matter if the subject does not know what it knows, or even that it knows. If 
explicit, verifiable evidence of subjective experience could be expressed in a 
valid and reliable manner, as objective fact in the context of the holistic 
experience, it should pass as science.

However, the problem for science is: once subjectivity has been made objective, 
how could it be returned to a pure state of subjectivity for the experiment to 
be reliably replicated by others? That thought encapsulates many of the 
ambiguous problems bio-information science seems to be struggling with, e.g., 
NP-hardness, ambiguity, and quantum-spin observations.

As such, I propose a new research methodology, which pertains to one-off valid 
and reliable experimentation when dealing with the "unseen". The "public" and 
"repeat" tests for vetting it as science could be replaced by a 
suitably-representative body of reviewing scientists who are accredited in the 
limitations of subjective, scientific observation.



From: WriterOfMinds 
Sent: Thursday, 29 August 2019 00:49
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

"You don’t know my qualia on red ... We may never know that your green is my 
red."
Great, seems like we've reached agreement on something.
When we communicate with words like "red," we're really communicating about the 
frequency of light. I would argue that we are not communicating our qualia to 
each other. If we could communicate qualia, we would not have this issue of 
being unable to know whether your green is my red. Qualia are personal and 
incommunicable *by definition,* and it's good to have that specific word and 
not pollute it with broader meanings.

In the mouse example, I was assuming that I had fully modeled the 
electro-mechanical phenomena in *this specific* mouse. I still don't think that 
would give me its qualia.

I would be happy to refer to a machine with an incommunicable first-person 
subjective experience stream as "conscious." But you've admitted that you're 
not trying to talk about incommunicable first-person subjective experiences, 
you're trying to talk about communication. I'm not concerned with whether the 
"consciousness" is mechanical or biological, natural or artificial; I'm 
concerned with whether it's actually "consciousness."

Permalink: https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M6c0065d6583e018c990255af