How many times have I stated in this group that big language modelers
bragging about their parameter counts is worse than cottoncandyfluffpuffery?
Reducing the parameter count by a factor of 1000 while achieving comparable
performance on just about _any_ plausible language benchmark is damning of
such hypercottoncandyfluffpuffery.
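
For concreteness, here is a minimal sketch (plain Python; every name and
number in it is a placeholder I made up, nothing from this thread) of the
only comparison that isn't puffery: same corpus, same metric, with the
parameter count reported beside the score rather than instead of it.

    from dataclasses import dataclass

    @dataclass
    class ModelResult:
        name: str
        params: int           # trainable parameter count
        bits_per_char: float  # cross-entropy on the shared corpus, bits/char

    def compare(a: ModelResult, b: ModelResult) -> None:
        # Lower bits/char wins; size is reported as context, not as the score.
        better = a if a.bits_per_char < b.bits_per_char else b
        ratio = max(a.params, b.params) / min(a.params, b.params)
        print(f"{a.name}: {a.bits_per_char:.3f} bpc, {a.params:,} params")
        print(f"{b.name}: {b.bits_per_char:.3f} bpc, {b.params:,} params")
        print(f"better on the shared benchmark: {better.name}; "
              f"parameter-count ratio {ratio:,.0f}x")

    # Placeholder figures for illustration only -- not real benchmark results.
    compare(ModelResult("big-model", 175_000_000_000, 0.94),
            ModelResult("small-model", 175_000_000, 0.96))

Running it prints both scores side by side; the point is that the parameter
count appears only as context, never as the headline.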

On Wed, Oct 7, 2020 at 4:08 AM <[email protected]> wrote:

> James, if theirs is trained on enwik9 and GPT was too, then we can
> compare. Remember, if both models reached a training level where neither
> really needs more data, you'd see the smarter one is better no matter how
> much data the other eats. So suppose GPT scores better than theirs: it's
> ok that it has a big model, because the other can't reach its accuracy
> even if it eats enough to grow a model as big as GPT's. A big model is
> only a problem if you can't run it; that requires optimization before
> moving on. One day, when we reach the seemingly best AI, we can then
> start worrying about model size and speed, as that's the only gain left.
>
> And if I'm wrong, then my first thought on this thread is correct.
