> On Feb 2, 2023, at 10:17 AM, Jon Elson via cctalk <[email protected]>
> wrote:
>
> On 2/1/23 22:10, Will Cooke via cctalk wrote:
>>
>>> On 02/01/2023 3:51 PM CST Paul Koning via cctalk <[email protected]>
>>> wrote:
>>>
>>> Not sure about that. What sort of numbers are we talking about?
>>> If all else fails there's core memory, which as far as I remember is pretty
>>> much unlimited for both read and write.
>>>
>>> paul
>> I don't know for sure and can't find any references, but I strongly suspect
>> that core memory would wear out over time as well. My reasoning is that, in
>> principle, it works the same way FRAM does. I usually refer to
>> FRAM as "core on a chip." Over time, the magnetic domains in FRAM tend to
>> stay in one polarization or another. I see no reason why the magnetic
>> domains in core wouldn't do the same. However, a single core is probably
>> bigger than the entire FRAM chip so there are a LOT more domains. That
>> means it would take a proportionally larger number of writes to wear out --
>> let's just say a million times as many. In addition, core access times were
>> in microseconds, whereas FRAM and other modern memories are in nanoseconds,
>> so it takes something like 1000 times longer on the wall clock to perform the
>> same number of writes. So in the end it would take something like a billion
>> times longer on the calendar to wear it out.
>>
>> I would be very interested if anyone actually knows and especially if there
>> are references available.
>
> I have extreme doubts that this is true. Memory cores are just tiny versions
> of pulse transformers, and similar square loop transformer core materials are
> used in switching power supplies that run for decades at high switching
> frequencies.
I don't know of core wear either, but I don't think your analogy is correct.
Memory cores are not just pulse transformers, they are magnetic logic and
storage elements. And yes, they have a square hysteresis loop to do so, while
power supply transformers do not (hysteresis is not wanted in that application).
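To make the "magnetic logic" point concrete, here is a toy model of a square-loop core (an illustrative sketch, not a physical simulation -- the drive units and threshold are assumptions chosen for clarity). The square hysteresis loop means the core only switches when the drive exceeds a full-select threshold, which is exactly what lets coincident-current addressing work: a half-select current on one wire disturbs nothing, while two coinciding half-currents flip the selected core.

```python
# Toy model of a square-loop memory core (illustrative only).
# Assumed units: two half-select currents sum to one full-select current.
FULL_SELECT = 1.0
HALF_SELECT = 0.5

class Core:
    def __init__(self):
        self.state = -1  # magnetized in the "0" direction

    def drive(self, current):
        # Square loop: the state flips only when the drive exceeds the
        # full-select threshold, in the direction of the drive.
        if current >= FULL_SELECT:
            self.state = +1
        elif current <= -FULL_SELECT:
            self.state = -1
        # Anything smaller (e.g. a half-select pulse on an unselected
        # core) leaves the stored state untouched.

c = Core()
c.drive(HALF_SELECT)       # half-selected core: no change
assert c.state == -1
c.drive(HALF_SELECT * 2)   # X and Y half-currents coincide: flips to "1"
assert c.state == +1
```

A linear (non-square) core material would partially switch on every half-select pulse, which is why the square loop matters for storage but is unwanted in a power transformer.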
As Will pointed out, cores are fairly large storage elements, and their
switching speeds are more modest. Not necessarily quite so modest, though --
the CDC 6600 mainframes in 1964 had memory cycling at a 1 MHz rate, which means
each half of the basic operation (read and restore) takes only a few hundred
nanoseconds. I
think the main limitation in that case wasn't so much the cores as the
difficulty of driving pulses 100 or so ns wide through a highly inductive load.
There are some unusual circuit tricks in those memories to reduce that problem
compared to the more common 4-wire designs.
paul