Pretty sure that's right. The recent innovations have been low-cost
incremental updates to the model. Language models for compression have been
doing this for years, and the most successful models are open source. The
leader on the Large Text Benchmark, nncp, is a transformer model with 199M
parameters, run on 10K CUDA cores with 24 GB of memory and trained on 1 GB
of text for 2.5 days. But that was almost 2 years ago.
http://mattmahoney.net/dc/text.html#1085
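
The link between modeling and compression is direct: a predictive model
assigns a probability p to each next symbol, and an arithmetic coder spends
about -log2(p) bits on it, so the compressed size is just the model's
cross-entropy on the text. Here is a minimal sketch of that principle
(a trivial adaptive byte-frequency model standing in for the transformer;
this is only an illustration, not nncp's actual code):

import math
from collections import defaultdict

def ideal_code_length_bits(data):
    # Laplace-smoothed adaptive byte model: every byte value starts with count 1.
    counts = defaultdict(lambda: 1)
    total = 256
    bits = 0.0
    for b in data:
        p = counts[b] / total      # model's probability for the next byte
        bits += -math.log2(p)      # cost an arithmetic coder would pay for it
        counts[b] += 1             # update the model after seeing the byte
        total += 1
    return bits

text = b"the quick brown fox jumps over the lazy dog " * 100
print(len(text), "bytes ->", round(ideal_code_length_bits(text) / 8), "bytes ideal")

A better model (a transformer instead of byte counts) means lower
cross-entropy and a smaller compressed file; that is the whole game.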

Google's advantage is access to huge training sets and massive computing
power. Human-level language models should in theory be trainable on 1 GB of
text, because that is about all the language we can process in a lifetime.
But for AI you really want the knowledge of billions of people, and
home-grown projects won't be able to get that. Still, a human-level language
model that can do internet searches should be almost as useful.
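
A rough back-of-envelope for that 1 GB figure (every number below is an
assumed round figure, chosen only to show the order of magnitude):

# Assumed round figures, not measurements.
words_per_minute = 150   # typical reading/listening rate
hours_per_day = 2        # attentive language input per day
years = 50
bytes_per_word = 5       # average word plus a space

words = words_per_minute * 60 * hours_per_day * 365 * years
print(f"{words:,} words, about {words * bytes_per_word / 1e9:.1f} GB")

That lands in the low single-digit GB range, i.e. on the order of 1 GB of
text over a lifetime.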

On Sat, May 6, 2023, 8:22 AM <[email protected]> wrote:

>
> https://www.semianalysis.com/p/google-we-have-no-moat-and-neither?utm_source=tldrai
