------- Original Message -------
On Tuesday, January 4, 2022 8:50 PM, k <[email protected]> wrote:

> Automated Image Generation
> ...
> All of a sudden, Russia comes in and releases a public model that, at a
> biased glance, looks like somebody just threw a goldmine at it. The
> encouraged way to use it is to visit a site in Russian with javascript
> and captchas: https://huggingface.co/sberbank-ai/rudalle-Malevich


"""
Training the ruDALL-E neural networks on the Christofari cluster has become the 
largest calculation task in Russia:

    - ruDALL-E Kandinsky (XXL) was trained for 37 days on the 512 GPU TESLA 
V100, and then also for 11 more days on the 128 GPU TESLA V100, for a total of 
20,352 GPU-days;
    - ruDALL-E Malevich (XL) was trained for 8 days on the 128 GPU TESLA V100, 
and then also for 15 more days on the 192 GPU TESLA V100, for a total of 3,904 
GPU-days.

Accordingly, training for both models totalled 24,256 GPU-days.
"""


You can see why complex models are the domain of big business, government, and
research institutions: training them takes an enormous amount of compute!

best regards,
