[agi] QUESTION!!

2021-01-07 Thread immortal . discoveries
Can modern computer vision see one image of, say, a cat, and then, when shown 10 
dummy images - one of which contains an unseen cat - recognize which image has 
the cat? The new cat is the one it saw before, but blurred, brightened, noisy, 
rotated, stretched, flipped, or brightness-inverted. This requires great 
accuracy at recognizing something it knows even when it is heavily distorted.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-Me4ed8dac2c7ad731953d944e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Preprint: "The Model-less Neuromimetic Chip and its Normalization of Neuroscience and Artificial Intelligence"

2021-01-07 Thread immortal . discoveries
To make accelerator hardware that is still general-purpose, you need to know 
what you need to allow. If you allow lots of flexibility, for example, then the 
tuner that codes the chip still needs to tell it what to do. Colin, you have 
not said what that is (the AGI neural rules/mechanisms we have in our brains). 
Chips are good, but I am sad that you have no discoveries about how AGI works, 
if I'm correct. Tell me: how do we configure said chip now to make AGI? You 
don't know, do you? How does AGI work?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2f2a092379e757d2-M5b23fac0dd55c677c51d49d3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] QUESTION!!

2021-01-07 Thread Matt Mahoney
On Thu, Jan 7, 2021, 6:12 AM  wrote:

> Can modern computer vision see one image of, say, a cat, and then, when shown
> 10 dummy images - one of which contains an unseen cat - recognize which image
> has the cat? The new cat is the one it saw before, but blurred, brightened,
> noisy, rotated, stretched, flipped, or brightness-inverted. This requires
> great accuracy at recognizing something it knows even when it is heavily
> distorted.
>

No. Humans can see because of decades of training: a petabyte of data through
our optic nerves. Even then, we are born knowing how to recognize, or how to
learn to recognize, things important to our survival, like faces and animals,
as opposed to barcodes or watermarks.
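A back-of-the-envelope check of that petabyte figure (the fiber count, bit rate, and waking hours below are my own rough assumptions, not from the post):

```python
# Order-of-magnitude estimate of lifetime optic-nerve input.
# Assumed numbers: ~1e6 fibers per optic nerve, ~10 bits/s per
# fiber, 20 years of 16 waking hours per day.
fibers = 2 * 1_000_000              # both optic nerves
bits_per_fiber_per_s = 10
seconds = 20 * 365 * 16 * 3600      # ~4.2e8 s of waking life
total_bytes = fibers * bits_per_fiber_per_s * seconds / 8
print(total_bytes / 1e15)           # ~1.05, i.e. about a petabyte
```

With these assumptions the total comes out to roughly one petabyte, so the claim is at least the right order of magnitude.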

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-Mee88c3e2f8adf95b8c881e39
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] QUESTION!!

2021-01-07 Thread Nanograte Knowledge Technologies
Considering new claims in facial recognition, there should be no reason why AI 
would not recognize a cat as a cat. However, morphing a cat to the point where 
it no longer resembles a cat would still prove the AI correct, because it would 
correctly point out that the object wasn't recognizable as a cat. The same 
argument could be applied to human beings deformed to resemble non-humans, and 
so on.


From: Matt Mahoney 
Sent: Thursday, 07 January 2021 21:29
To: AGI 
Subject: Re: [agi] QUESTION!!



On Thu, Jan 7, 2021, 6:12 AM <immortal.discover...@gmail.com> wrote:
Can modern computer vision see one image of, say, a cat, and then, when shown 10 
dummy images - one of which contains an unseen cat - recognize which image has 
the cat? The new cat is the one it saw before, but blurred, brightened, noisy, 
rotated, stretched, flipped, or brightness-inverted. This requires great 
accuracy at recognizing something it knows even when it is heavily distorted.

No. Humans can see because of decades of training: a petabyte of data through 
our optic nerves. Even then, we are born knowing how to recognize, or how to 
learn to recognize, things important to our survival, like faces and animals, 
as opposed to barcodes or watermarks.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-M17a987e907f3b30f4effd714
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-07 Thread James Bowery
On Wed, Jan 6, 2021 at 11:44 PM Ben Goertzel  wrote:

> ...But there is a formal gap between Laws of Form logic and Link-Theory /
> Quantum-Logic, which probably can be filled but I am doing too much
> other stuff to think about it hard at the moment.   That's why I'm
> probing to see if you have some insight or reference that may fill
> this gap, as you're one of the few folks I know who have dug into
> these various areas apparently fairly deeply...
>

If I had it to do over again, I would have hired Manthey rather than
Etter.  See:

https://patents.google.com/patent/US20160148110A1/en

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T425c68f0cea319cd-M9d6188ce1b2365291f0c3d6e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-07 Thread Ben Goertzel
I skimmed but did not yet read that patent application. However, it doesn't
appear to address the formal gap between Laws of Form logic and Link-Theory /
Quantum-Logic in any direct-ish way, does it?

On Thu, Jan 7, 2021 at 9:08 AM James Bowery  wrote:
>
>
>
> On Wed, Jan 6, 2021 at 11:44 PM Ben Goertzel  wrote:
>>
>> ...But there is a formal gap between Laws of Form logic and Link-Theory /
>> Quantum-Logic, which probably can be filled but I am doing too much
>> other stuff to think about it hard at the moment.   That's why I'm
>> probing to see if you have some insight or reference that may fill
>> this gap, as you're one of the few folks I know who have dug into
>> these various areas apparently fairly deeply...
>
>
> If I had it to do over again, I would have hired Manthey rather than Etter.  
> See:
>
> https://patents.google.com/patent/US20160148110A1/en



-- 
Ben Goertzel, PhD
http://goertzel.org

“Words exist because of meaning; once you've got the meaning you can
forget the words.  How can we build an AGI who will forget words so I
can have a word with him?” -- Zhuangzhi++

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T425c68f0cea319cd-M303bf706c5aa758bfe9bf588
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Riddle [Image Recognition]. Suppose you want to find a single straight line in a 1000x1000 pixel image

2021-01-07 Thread stefan.reich.maker.of.eye via AGI
Angle detector [1 microsecond]

https://www.youtube.com/watch?v=3VToiitnzd4_channel=StefanReich 
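For reference, the classical non-learned baseline for this riddle is the Hough transform: every edge point votes for all (angle, offset) line parameters it could lie on, and the fullest accumulator bin identifies the line. A minimal sketch (my own illustration, unrelated to the method in the video, and assuming the edge points are already extracted):

```python
import numpy as np

def hough_line(points, img_size=1000, n_theta=180):
    """Find the dominant straight line through a set of edge points.
    Each point votes for every (theta, rho) pair it could lie on; the
    fullest accumulator bin gives the line x*cos(t) + y*sin(t) = rho."""
    thetas = np.deg2rad(np.arange(n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    diag = int(np.ceil(np.sqrt(2) * img_size))   # max |rho| possible
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, float(np.rad2deg(thetas[t]))  # (rho, theta in degrees)

# Points on the diagonal y = x are recovered as theta = 135 deg, rho = 0.
pts = [(i, i) for i in range(100)]
print(hough_line(pts, img_size=100))
```

For a 1000x1000 image this is a few hundred million accumulator updates at most, which is why vectorized or hardware implementations of it are fast.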

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tde13ab8502f7e504-Mcdfe9bafecfb463d2617b4fd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] QUESTION!!

2021-01-07 Thread immortal . discoveries
On Thursday, January 07, 2021, at 2:29 PM, Matt Mahoney wrote:
> No. Humans can see because of decades of training, a petabyte through our 
> optic nerves. Even then we are born knowing how to recognize or learn to 
> recognize things important to our survival. Things like faces and animals, 
> vs. barcodes or watermarks.
Yes, more images help, but this is one-shot learning.

Modern approaches rely on data augmentation, but for it to be truly helpful 
you'd have to store an enormous number of variations. And I believe we actually 
store no variations unless they are seen or thought of. Only syntax, and maybe 
semantics (maybe via syntax). Vision nodes have many pixels; there may be, say, 
10,000 vision nodes, so semantics links only the top 1,000, but each of the 
10,000 nodes can vary thousands of times. Still thinking.

Other approaches are complex and still not human level.
https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf
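For what it's worth, the simplest baseline for the 1-vs-10 test needs no training at all: compare a hand-crafted descriptor of the single template to each candidate and pick the nearest. The histogram descriptor below is my own toy choice (it survives flips, 90-degree rotations, and noise, but not brightness inversion); the linked paper's Siamese network learns a far better embedding.

```python
import numpy as np

def descriptor(img):
    """Toy descriptor: a normalized intensity histogram. Unchanged by
    flips and 90-degree rotations and tolerant of noise, though a
    learned embedding would handle far more distortions."""
    h, _ = np.histogram(img, bins=16, range=(0.0, 1.0))
    return h.astype(float) / h.sum()

def match_one_shot(template, candidates):
    """Index of the candidate closest to the one template (L1 distance)."""
    d = descriptor(template)
    return int(np.argmin([np.abs(descriptor(c) - d).sum() for c in candidates]))

rng = np.random.default_rng(0)
template = (np.arange(1024) % 8).reshape(32, 32) / 16.0   # striped "cat"
flipped = np.flipud(template)                              # distorted copy
others = [rng.random((32, 32)) for _ in range(9)]          # dummy images
print(match_one_shot(template, others[:4] + [flipped] + others[4:]))  # 4
```

This picks the distorted template out of the ten candidates, but only because the distortion preserves the histogram; the hard cases in the question (stretching, brightness inversion) are exactly what this kind of fixed descriptor fails on.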
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-M744bb9cf9e80bfea5026fc23
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Preprint: "The Model-less Neuromimetic Chip and its Normalization of Neuroscience and Artificial Intelligence"

2021-01-07 Thread immortal . discoveries
I see 3 interesting PDFs above; I'll read them tomorrow.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2f2a092379e757d2-M6e1afca6cd1917f7f317c7da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: QUESTION!!

2021-01-07 Thread stefan.reich.maker.of.eye via AGI
Well, what would the other images show? That seems to me the crucial question.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-Mca3dc27c6e00a78eda7c5f7f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-07 Thread James Bowery
Manthey's geometric algebra U(1) × SU(2) × SU(3) × SO(4) over ℤ₃ =
{-1,0,1} subsumes quantum logic, hence imaginary logic. Insofar as link
theory* is concerned, consider the absence of a row in a link table to be
equivalent to the 0 in ℤ₃, with 1 and -1 for the two corresponding
presence cases.
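To make the row-absence convention concrete, here is my own toy encoding; this is just an illustration of the ℤ₃ bookkeeping, not Manthey's actual construction:

```python
def z3(n):
    """Reduce an integer to its Z3 representative in {-1, 0, 1}."""
    return ((n + 1) % 3) - 1

class LinkTable:
    """Toy link table: a missing row reads as 0 in Z3; a present
    row carries +1 or -1 for the two corresponding cases."""
    def __init__(self):
        self.rows = {}                       # (a, b) -> +1 or -1
    def link(self, a, b, sign):
        assert sign in (-1, 1)
        self.rows[(a, b)] = sign
    def value(self, a, b):
        return self.rows.get((a, b), 0)      # absence = 0

t = LinkTable()
t.link("p", "q", 1)
t.link("q", "r", -1)
print(t.value("p", "q"), t.value("q", "r"), t.value("p", "r"))  # 1 -1 0
```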

I suggest contacting Manthey directly at the email address given in his
most recent paper:

http://www.rootsofunity.org/wp-content/uploads/2020/08/OutOfTheBox_2020.pdf

You can also obtain the geometric algebra calculator from his younger
colleague, Doug Matzke, at:

http://www.matzkefamily.net/doug/

I'm sure they would be glad to answer your questions in a manner far more
lucid than I can.

*I don't think Manthey is very familiar with link theory since he was
already apparently ahead of Etter at the time Etter first presented it at
PhysComp 94.

On Thu, Jan 7, 2021 at 11:14 AM Ben Goertzel  wrote:

> I skimmed but did not yet read that patent application.   However it
> doesn't appear to address the  formal gap between Laws of Form logic
> and Link-Theory / Quantum-Logic in any direct-ish way, does it?
>
> On Thu, Jan 7, 2021 at 9:08 AM James Bowery  wrote:
> >
> >
> >
> > On Wed, Jan 6, 2021 at 11:44 PM Ben Goertzel  wrote:
> >>
> >> ...But there is a formal gap between Laws of Form logic and Link-Theory
> /
> >> Quantum-Logic, which probably can be filled but I am doing too much
> >> other stuff to think about it hard at the moment.   That's why I'm
> >> probing to see if you have some insight or reference that may fill
> >> this gap, as you're one of the few folks I know who have dug into
> >> these various areas apparently fairly deeply...
> >
> >
> > If I had it to do over again, I would have hired Manthey rather than
> Etter.  See:
> >
> > https://patents.google.com/patent/US20160148110A1/en
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> “Words exist because of meaning; once you've got the meaning you can
> forget the words.  How can we build an AGI who will forget words so I
> can have a word with him?” -- Zhuangzhi++

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T425c68f0cea319cd-M5d1737bdf558de8ed1fe61ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: QUESTION!!

2021-01-07 Thread immortal . discoveries
You see an orange-juice-colored cat that oddly looks like a human; now you see 
10 new images: one is an elephant, one a waterfall, one a leaf, one a house, 
one a tooth, one a tree, and one is that weird cat, but with many distortions.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-Mce6a49e44d8849767fb42b38
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: QUESTION!!

2021-01-07 Thread immortal . discoveries
It should know the two orange cats are extremely similar.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T751e544d2713dc23-M37cf5c86dd9e11127dbf8877
Delivery options: https://agi.topicbox.com/groups/agi/subscription