On Friday, May 19, 2023, at 1:27 PM, Matt Mahoney wrote:
> we knew how to do this in the 1980s but we haven't had the hardware to 
> implement human brain sized neural networks (6 x 10^14 parameters) until now.

I spoke to ChatGPT-4 about a thought I had from reading this:

User
I thought computer scaling slowed down around 2012, and that today's 
supercomputers are comparable to 2012's, or at least (I might be flat wrong) 
comparable in price.


ChatGPT
The observation you are alluding to is known as "Moore's Law." Gordon Moore, 
co-founder of Intel, observed in 1965 that the number of transistors on an 
integrated circuit was doubling approximately every two years. This observation 
held true for several decades and underpinned the rapid progress in computer 
processing power that we saw during the second half of the 20th century and 
into the 21st.
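
(As a rough illustration of what a fixed two-year doubling period implies, here 
is a small Python sketch; the actual period varied in practice, so treat the 
numbers as approximate:)

    # What "doubling every two years" implies over a span of years.
    def moores_law_factor(start_year, end_year, doubling_period=2.0):
        """Growth factor in transistor count implied by a fixed doubling period."""
        doublings = (end_year - start_year) / doubling_period
        return 2.0 ** doublings

    # 1965 to 2012: about 23.5 doublings, roughly a 12-million-fold increase.
    print(f"{moores_law_factor(1965, 2012):.2e}")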

However, you're right that this pace of progress has been slowing down since 
around the 2010s, for a variety of reasons, including physical limitations of 
how small we can make transistors and how many we can pack into a given area.

Despite this, there have still been significant advancements in computing power 
since 2012, although perhaps not at the same exponential pace. This is in part 
due to new architectural designs, improved energy efficiency, advances in 
parallel processing, and the development of specialized hardware like GPUs 
(Graphics Processing Units) and TPUs (Tensor Processing Units) for specific 
tasks like machine learning.

Regarding supercomputers, these machines continue to improve at a rapid pace. 
In fact, they often benefit from the advancements in parallel processing and 
energy efficiency that I mentioned earlier. For example, as of my knowledge 
cutoff in September 2021, the fastest supercomputer in the world was Fugaku in 
Japan, which has several times the performance of the fastest supercomputer in 
2012.
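
(For concreteness, a hedged check of that comparison. The 2012 figure below 
assumes Titan's roughly 17.6 petaFLOPS Linpack result from the November 2012 
TOP500 list; that number is my assumption, not part of ChatGPT's reply:)

    # Fugaku (2021) vs. the fastest 2012 system (assumed: Titan, ~17.6 petaFLOPS HPL).
    titan_2012_flops = 17.6e15
    fugaku_2021_flops = 442e15
    print(f"Fugaku / Titan ~ {fugaku_2021_flops / titan_2012_flops:.0f}x")   # ~25x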

Cost-wise, it's a complex issue. While individual components may have become 
more expensive due to the slowing of Moore's Law, the overall cost of computing 
power (as measured in calculations per second) has generally continued to 
decrease. This is because we've gotten better at optimizing hardware and 
software to get more performance out of the same resources, and because we've 
developed new, more cost-effective technologies.

So, overall, while the pace of progress has indeed slowed from the heady days 
of the late 20th century, we're still seeing substantial advancements in 
computing technology. The challenge has shifted from simply packing more 
transistors onto a chip to finding creative ways to get more performance out of 
our hardware and software.


User
What about 2021 compared to 1980? How big was the difference in compute power 
between their best supercomputers, or cluster groups?


ChatGPT
The increase in computational power from 1980 to 2021 has been nothing short of 
extraordinary.

In 1980, the fastest supercomputer was the Cray-1, developed by Cray Research. 
It was capable of 160 million floating-point operations per second (FLOPS). A 
floating-point operation is a calculation involving numbers with fractional 
parts, and FLOPS is a common measure of computer performance.

Fast forward to 2021, and the fastest supercomputer at the time of my knowledge 
cutoff in September was Fugaku, developed by RIKEN and Fujitsu in Japan. Fugaku 
achieved a performance of 442 petaFLOPS on the High Performance Linpack (HPL) 
benchmark.

To put that in perspective, one petaFLOPS is equal to one quadrillion (a 
thousand trillion) FLOPS. So, Fugaku's performance is nearly 3 billion times 
that of the Cray-1.
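
(The "nearly 3 billion" figure checks out from the numbers quoted above:)

    # Quick arithmetic check of the Cray-1 vs. Fugaku comparison.
    cray_1_flops = 160e6       # 160 megaFLOPS (1980)
    fugaku_flops = 442e15      # 442 petaFLOPS on HPL (2021)
    print(f"Fugaku / Cray-1 = {fugaku_flops / cray_1_flops:.2e}")   # ~2.76e9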

That's an incredible increase in computational power. It's worth noting that 
this doesn't even take into account many other advances, such as in storage 
capacity, network speed, and software efficiency, which have also seen 
substantial improvements over the same period.

Clusters, which group together multiple computers to work on tasks in parallel, 
have also seen massive increases in performance. The world's largest and 
fastest computing clusters in 2021 were capable of performance in the exaFLOPS 
range (one exaFLOPS is a thousand petaFLOPS, or a quintillion FLOPS).
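
(For reference, the FLOPS prefixes used in this exchange, as powers of ten:)

    # FLOPS unit prefixes, as powers of ten.
    units = {
        "megaFLOPS": 1e6,    # million
        "petaFLOPS": 1e15,   # quadrillion
        "exaFLOPS":  1e18,   # quintillion
    }
    print(units["exaFLOPS"] / units["petaFLOPS"])   # 1000.0, i.e. 1 exaFLOPS = 1000 petaFLOPS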

So, the difference in compute power between the best supercomputers or clusters 
in 1980 compared to 2021 is staggering, reflecting over four decades of 
technological advancement.