Re: [agi] is anyone interested in explaining AGI?

2021-01-23 Thread Alan Grimes via AGI
Matt Mahoney wrote:
> What problem are you trying to solve with AGI or ASI?

All Problems.

> I can think of two. One is automating human labor to save $90 trillion
> per year. That was my focus. The second is to extend life by building
> robots that look and act like you.

That's the Terasem proposal.
It's bullshit.

But here's the website...
https://terasemmovementfoundation.com/


-- 
The vaccine is a LIE. 
#EggCrisis 
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T958bb5810b81761c-M9fb280c599e20fbdf41d284d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] is anyone interested in explaining AGI?

2021-01-23 Thread immortal . discoveries
The ASIs we are going to have very soon will be the new species. There will be
more of them than us, and not only will they not die, they will also help
humans not die, not only by cloning our homeworld larger but also by repairing
you yourself, making you immortal. All of Earth will become nanobots,
transforming atom types into other atom types, e.g. oxygen into gold, at least
to some extent... The massive galaxy-sized nanobot god will only be on the
defense so it can stay immortal the longest; there will be no utopia etc.,
only militia defense. Our joys and food and cloning etc. are exactly that: we
are trying to be immortal. Of course humans may still cling around and get
some VR utopia, but most VR will be the new species just thinking/planning/
simulating the real world as part of their militia defense system...

I work on AGI/ASI because it is so close and will do a better job at stopping
my death than we could on our own.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T958bb5810b81761c-M8c416c493e6539043ce5bc7f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] is anyone interested in explaining AGI?

2021-01-23 Thread Matt Mahoney
What problem are you trying to solve with AGI or ASI?

I can think of two. One is automating human labor to save $90 trillion per
year. That was my focus. The second is to extend life by building robots
that look and act like you.

Other possibilities are to launch a singularity, or perhaps to replace DNA
based life with something more efficient, or perhaps create a Kardashev
level 2 or 3 civilization. Or perhaps to launch a gray goo attack or
protect against one. Or create a virtual utopia running a copy of your mind.

Or did you have something else in mind?

On Thu, Jan 21, 2021, 12:54 PM  wrote:

> AGI is an "artificial" human brain and body. The reason we want to make
> AGI is not because we want a billion AGIs to work together (we already have
> 1 billion human AGIs on Earth: us); it's because it is so easy to make AGI
> into ASI immediately afterwards, by giving the AGIs better intelligence,
> more data, sensors, motors, short-term memory, speed in thinking/moving,
> etc. Now we get a billion ASIs that will work together. Not only will they
> individually get far more done far faster, they will also be able to
> connect with peer ASIs as a team better than we can.
>
> If you mean creating AGI or ASI or a billion ASIs is as costly as getting
> humans to work together better, so we may as well forget AGI, I strongly
> disagree. We have nearly made AGI now, and making AGI into ASI is easy too
> (mainly from neural speed-up and intelligence improvement, among many other
> smaller things like higher-resolution cameras). Making ASI into 1 billion
> ASIs is also easy: all we do is clone the same trained brain and then
> differentiate it so the many selves work in parallel on the things the
> original self wanted to do. It won't cost that much storage or compute to
> run them all in parallel; we have enough computers on Earth to run them
> all. Just look at how small the code, memory, and compute (after training)
> were for something like DALL-E or GPT-2, and clearly their training can be
> made much more efficient with proper AGI. These ASIs that do wonders like
> DALL-E, GPT-2, and Jukebox, which are nearly here (we are so close to
> ASIs), don't need many bodies at first. They will run faster sensors,
> motors, and brains, but the brains will be faster not only in their actions
> and their imaginations but also in that they can safely think about complex
> or expensive scenes, do anything, change tools, skip time, etc. It's way
> faster to think than to do real experiments; all you need is enough data so
> you don't need to look outside the box. I can already generate likely true
> paths in my brain without trying any code (I rarely code).
>
> That's why I said your AGI, Matt, is not AGI but rather a way to get
> *human AGIs* to *work together better*. That's good, but it's not AGI, and
> it won't give us ASIs working together either, which would be far faster
> than us at solving Human Death.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T958bb5810b81761c-M2ebf326fe75fe67a082bd86c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Simplified Mixture of Experts Routing Algorithm

2021-01-23 Thread James Bowery
Matt should like this one even though it falls far short of his big think:

Switch Transformers: Scaling to Trillion Parameter Models with Simple and
Efficient Sparsity 

In deep learning, models typically reuse the same parameters for all
inputs. Mixture of Experts (MoE) defies this and instead selects different
parameters for each incoming example. The result is a sparsely-activated
model -- with outrageous numbers of parameters -- but a constant
computational cost. However, despite several notable successes of MoE,
widespread adoption has been hindered by complexity, communication costs
and training instability -- we address these with the Switch Transformer.
We simplify the MoE routing algorithm and design intuitive improved models
with reduced communication and computational costs. Our proposed training
techniques help wrangle the instabilities and we show large sparse models
may be trained, for the first time, with lower precision (bfloat16)
formats. We design models based off T5-Base and T5-Large to obtain up to 7x
increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains
over the mT5-Base version across all 101 languages. Finally, we advance the
current scale of language models by pre-training up to trillion parameter
models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over
the T5-XXL model.
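
For anyone skimming the abstract, here is a minimal sketch of what the top-1
("switch") routing it describes can look like, assuming a dense softmax router
over experts; the names, shapes, and toy linear experts below are illustrative
assumptions, not the paper's code:

import numpy as np

def switch_route(tokens, router_weights, experts):
    """Route each token to exactly one expert (top-1), scaled by its gate.

    tokens: (n, d) array; router_weights: (d, num_experts);
    experts: list of callables, each mapping (m, d) -> (m, d).
    """
    logits = tokens @ router_weights                 # (n, num_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)        # softmax over experts
    choice = probs.argmax(axis=1)                    # top-1 expert per token
    gate = probs[np.arange(len(tokens)), choice]     # gate value of chosen expert
    out = np.empty_like(tokens)
    for e, expert in enumerate(experts):
        mask = choice == e
        if mask.any():                               # each expert sees only its tokens
            out[mask] = gate[mask][:, None] * expert(tokens[mask])
    return out

# Toy usage: 8 tokens, model dim 4, two linear "experts".
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
router_weights = rng.normal(size=(4, 2))
experts = [lambda x, W=rng.normal(size=(4, 4)): x @ W for _ in range(2)]
print(switch_route(tokens, router_weights, experts).shape)  # (8, 4)

The point of the simplification is that each token touches only one expert's
parameters, so parameter count can grow with the number of experts while
per-token compute stays roughly constant.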

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T10a03587f1fe2ac4-M8153ec66fa9721d6cdd59f3b
Delivery options: https://agi.topicbox.com/groups/agi/subscription