All is needed
Architecture-wise, the layers themselves can be arbitrary, but for most
sequence processing people will shout "Transformer!" and call it a day.
Most processes do follow a sorta-autoregressive thing, and sequence modeling is
a good way to look at the problem, so I get the jump to th
Learning is an important question and I do not want to take away from it, but
these can be treated independently.
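For what it's worth, here is a minimal sketch of that sorta-autoregressive framing: a model that predicts the next symbol from the ones before it. The layer type is deliberately generic (a GRU stands in where people would shout "Transformer!"); all names and sizes below are made up for illustration, not taken from anyone's actual system.

# Hypothetical sketch: autoregressive next-token prediction.
# The sequence layer is intentionally generic; a Transformer with a causal
# mask would slot into the same place.
import torch
import torch.nn as nn

class TinyAutoregressor(nn.Module):
    def __init__(self, vocab_size=256, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # arbitrary sequence layer
        self.head = nn.Linear(dim, vocab_size)          # next-symbol logits

    def forward(self, tokens):                           # tokens: (batch, time)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                              # (batch, time, vocab)

model = TinyAutoregressor()
tokens = torch.randint(0, 256, (2, 16))
logits = model(tokens[:, :-1])                           # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(logits.reshape(-1, 256),
                                   tokens[:, 1:].reshape(-1))
loss.backward()

The point is only that "sequence model" and "how it learns" are separable choices, as the message above says.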
https://ai.stackexchange.com/questions/10623/what-is-self-supervised-learning-in-machine-learning
Self-supervised learning is also when you talk to yourself: you squeeze out and
mine gold from what you already know, so it's supervised, but not by us lol. By itself!
Unsupervised! Anyway, it's learning, ok?
Self-supervised learning is another one.
Is self-supervised learning better than one-shot imitation learning?
What about self-supervised imitation learning?
Is there such a thing?
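Since the thread keeps circling the definition, here is a toy illustration of the "supervised by itself" idea: the training target is carved out of the input, with no human label involved. The corpus and the masking scheme below are invented purely for illustration.

# Toy self-supervised objective: hide one word and ask a model to fill it in.
# The "label" is just a piece of the input itself, which is the whole point.
import random

corpus = ["the cat sat on the mat", "the dog sat on the rug"]

def make_example(sentence):
    words = sentence.split()
    i = random.randrange(len(words))
    target = words[i]
    words[i] = "[MASK]"
    return " ".join(words), target   # (input with a hole, label mined from the data)

for sentence in corpus:
    masked, label = make_example(sentence)
    print(masked, "->", label)

Imitation learning, by contrast, needs a demonstration from some other agent to copy; self-supervised imitation would mean the demonstrations themselves come from the system's own data rather than a human teacher.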
On Wed, Oct 30, 2019 at 7:04 PM wrote:
> The issue for me is different generalized classes of tasks operate on
> dif
The only people advancing AGI seem more and more to be larger companies like
OpenAI and Google. They get more mentions. They are super rich now and famous
and have a huge team. They re-use their architectures across OpenAI Five, the Arm,
GPT-2, MuseNet, Image Completer, and Hide & Seek. And they give
The issue for me is that different generalized classes of tasks operate on different
sets of symbols, and we often use AIs that are trained to operate on a single
set of symbols; often the symbols are dependent on the AI as well, as with
GPT-2's token embedding and position embedding, or any Transformer
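To make the "symbols are dependent on the AI" point concrete, here is a rough sketch of how a GPT-2-style model turns token ids into its own internal vectors: each token and each position looks up a learned embedding and the two are summed, so the "symbols" the layers actually see are tied to that particular model's learned tables. The sizes are illustrative, not GPT-2's real ones.

# Sketch of token + position embeddings feeding a Transformer stack.
import torch
import torch.nn as nn

vocab_size, max_len, dim = 1000, 128, 64
tok_emb = nn.Embedding(vocab_size, dim)       # token embedding table (model-specific)
pos_emb = nn.Embedding(max_len, dim)          # position embedding table (model-specific)

token_ids = torch.tensor([[12, 7, 412, 9]])               # (batch, time)
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # 0, 1, 2, 3
x = tok_emb(token_ids) + pos_emb(positions)               # what the layers actually consume
print(x.shape)                                            # torch.Size([1, 4, 64])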
The AI problem is analogous to the UFO problem - there appears to be
something there that works, but no one has a clue as to even where to start
to figure out how it works.
Steve
On Wed, Oct 30, 2019, 2:58 AM wrote:
> oh! damn I hate misinterpreting things, I need to concentrate harder when
> i
Transformers make cool hive minds; narrow AI, obviously.
Me, as the world's top AGI scientist: GPT-2 is not AGI. It is still narrow AI.
AGI is made up of many narrow AIs working in the same body. But some of the
narrow AIs in the bunch generate other narrow AIs, in a completely unsupervised manner.
I do not see these special AIs generating and t
Unsupervised one-shot imitation-learning inter-agent self-play.
Did you ever think GPT-2 could self-play itself through imitation? Imitation
keeps the topic the same but also learns knowledge quickly by passing it down,
which requires communicating with other related agents.
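A toy sketch of that "passing knowledge down" loop: one already-trained agent produces outputs and a second agent is trained to match them, so knowledge moves between agents without human labels. Both networks below are stand-ins; nothing here is GPT-2-specific or anyone's actual setup.

# Toy teacher/student imitation loop: the teacher's outputs become the
# student's training targets, so knowledge is passed down between agents.
import torch
import torch.nn as nn

teacher = nn.Linear(8, 4)        # pretend this is the experienced agent
student = nn.Linear(8, 4)        # the imitator
opt = torch.optim.SGD(student.parameters(), lr=0.1)

for step in range(100):
    x = torch.randn(32, 8)                           # shared situation both agents see
    with torch.no_grad():
        target = teacher(x)                          # teacher demonstrates
    loss = nn.functional.mse_loss(student(x), target)  # student imitates
    opt.zero_grad()
    loss.backward()
    opt.step()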
---
I see unsupervised learning is still very primitive in the world of machine
learning.
Here is a link with a definition for one-shot learning.
https://paperswithcode.com/task/one-shot-learning
One-shot imitation learning:
Our approach is to combine meta-learning with imitation learning to
enable one-shot imitation learning. The core idea is that provided a
single demonstrati
please explain?
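To unpack that quote a little: the policy is conditioned on a single demonstration of a task, so at test time one new demo is enough to specify what to do. Below is a very loose sketch of that structure; the shapes, names, and network choices are hypothetical and this is not the implementation from the linked work.

# Loose sketch of a demonstration-conditioned policy for one-shot imitation.
import torch
import torch.nn as nn

class DemoConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=10, act_dim=4):
        super().__init__()
        self.demo_encoder = nn.GRU(obs_dim + act_dim, 32, batch_first=True)
        self.policy = nn.Sequential(nn.Linear(obs_dim + 32, 64), nn.ReLU(),
                                    nn.Linear(64, act_dim))

    def forward(self, demo, obs):
        # demo: (batch, demo_len, obs_dim + act_dim) -- one recorded demonstration
        # obs:  (batch, obs_dim)                     -- the current situation
        _, h = self.demo_encoder(demo)               # summarize the demo into a vector
        context = h[-1]                              # (batch, 32)
        return self.policy(torch.cat([obs, context], dim=-1))

policy = DemoConditionedPolicy()
demo = torch.randn(1, 20, 14)    # a single demonstration of a new task
obs = torch.randn(1, 10)
action = policy(demo, obs)       # act on the new task from that one demo

Trained across many tasks (the meta-learning part), such a policy is meant to generalize to a new task from the one demo alone.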
You could look up one-shot learning.
It is interesting.
You could look up the definition of one-shot imitation learning.
On Wed, Oct 30, 2019 at 8:49 AM wrote:
> Im not sure what that stuffs about - but if you can imitate something, you
> can guess what itll do next.
That's cool, it's definitely paralleling mine too... especially since you can run
the physics engine on the GPU. (hawhaw)
Yes, a physics engine can't do it alone, because there are some things you can't
form geometry for; sometimes it's too quick for the camera to store, sometimes
it's hidden from the camera.
I think it is a great idea that parallels my work. There are some flaws with
doing it in an AGI fashion. I am not saying what they are.
Physics engine optimization. Sounds like a good idea! Because when you're
building a motor network, you have to sample the physics for all its possible
mutations, so there is a lot to reiterate.
What do you think about it?
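One hedged reading of "physics engine optimization" (and of the game-physics video linked elsewhere in the thread) is a learned surrogate: generate state transitions from a slow reference simulator once, then train a fast network to approximate the step function so sampling many candidate motions becomes cheap. The toy spring "simulator" below is invented purely for illustration.

# Hypothetical learned physics surrogate: fit a small net to a simulator step.
import torch
import torch.nn as nn

def slow_simulator_step(state, dt=0.01, k=5.0):
    # Toy spring dynamics standing in for an expensive physics engine.
    pos, vel = state[..., 0], state[..., 1]
    acc = -k * pos
    return torch.stack([pos + vel * dt, vel + acc * dt], dim=-1)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(2000):
    state = torch.randn(256, 2)                      # randomly sampled states
    target = slow_simulator_step(state)              # ground truth from the real engine
    loss = nn.functional.mse_loss(surrogate(state), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, surrogate(state) stands in for the engine when exploring mutations.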
AI Learns To Compute Game Physics In Microsecond:
https://www.youtube.com/watch?v=atcKO15YVD8
I'm not sure what that stuff's about - but if you can imitate something, you can
guess what it'll do next.
I like looking at one-shot imitation learning videos.
Have you looked at this website?
https://bair.berkeley.edu/blog/
On Wed, Oct 30, 2019 at 5:23 AM wrote:
> that is cool Doddy.
>
> 3d computer vision is the way for me too, because it removes the single
> perspective from the detection. The fi
That is cool, Doddy.
3D computer vision is the way for me too, because it removes the single
perspective from the detection. The first computer vision was 2D, and it
still works, but 3D is the cheeky alternative that should work just as well
with less stuffing around with neural network mumbo jumbo.
Facebook is doing it too.
Here is a link:
https://venturebeat.com/2019/10/29/facebook-highlights-ai-that-converts-2d-objects-into-3d-shapes/
On Tue, Oct 29, 2019 at 2:44 PM doddy wrote:
> you posted the video on youtube today.
>
> On Tue, Oct 29, 2019 at 12:42 PM wrote:
>
>> had that music blas
Oh! Damn, I hate misinterpreting things, I need to concentrate harder when I'm
reading other people's posts... god, sorry.
Yeah! haha! Watch out, it's a small world!!!
Nooo, I just meant that my publicly declared hate is now a thing one can
find online...
On Wed, 30 Oct 2019 at 10:48, wrote:
> Why is that Stefan - ask me more questions and ill help clarify things if
> u want me to.
> Im not sure if these things I say are to be said, or kept secret...
> becaus
Why is that, Stefan? Ask me more questions and I'll help clarify things if you
want me to.
I'm not sure if these things I say are to be said, or kept secret... because
people aren't supposed to know?
Here is my unit - it's using search, and it seems quite lively.
https://www.youtube.com/watch?v=6H
Sigh. Now this mail's subject is haunting me...
On Wed, 30 Oct 2019 at 09:56, wrote:
> I wouldnt call it a complete ripoff - a search can kick your butt at
> chess, or any other domain for that matter, and it actually can seem
> "alive" if a computer looks all ends and picks the max. The fut
I wouldn't call it a complete ripoff - a search can kick your butt at chess, or
any other domain for that matter, and it can actually seem "alive" if a
computer looks at all the ends and picks the max. The future of AI is searching.
I don't think it's even moral to make something that can think like
Yes, that's true in a way, I.D., but you have to supply a parser and calculator
with the decompressor for it to work that out, and it only works on basic
arithmetic, nothing else, just given that example.
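Referring back to the "looks at all the ends and picks the max" description a few messages up, here is a bare-bones version of that kind of exhaustive search: minimax over a tiny hand-made game tree. The tree and its payoffs are invented purely to show the mechanics.

# Exhaustive minimax: walk every line of play to the end, then back up the values.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):          # a leaf: the final payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Leaves are payoffs for the maximizing player; inner lists are choice points.
game_tree = [[3, 5], [2, [9, -1]], [0, 7]]
print(minimax(game_tree))                        # looks at every end, picks the max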