On Fri, Feb 16, 2024, 1:33 AM <[email protected]> wrote:

> https://openai.com/research/video-generation-models-as-world-simulators
>

So many questions. How much training data, how much compute to train, how
much compute to generate a video, how many parameters?

It is estimated (because OpenAI didn't say) that GPT-4 has 1.8 trillion
parameters, was trained on about 13 trillion tokens (much of the public
Internet), cost $100M to train (10^25 ops at 10^17 ops per dollar), and runs
inference on a cluster of 128 A100 GPUs (around $2M of hardware).
https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked/
But other sources give different guesses.

Is audio a harder problem than video, or does video have a bigger payoff so
they attacked it first?

It costs $1M to $10M to produce a movie for a mass audience that you would
pay $1 to $10 to watch, so per-viewer generation cost has to fall by five to
six orders of magnitude, or roughly 17 to 20 halvings. If Moore's law drops
the price by half every 18 months, then we are still 25 to 30 years from
customized movies replacing mass entertainment. Should we expect a similar
time scale for music?

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4ad5d8c386d0e116-Mc9f3c07a5bb0dac06360e952
Delivery options: https://agi.topicbox.com/groups/agi/subscription